Improved spikeprop algorithm for neural network learning

A Spiking Neural Network (SNN) uses individual spikes in the time domain to communicate and to perform computation, much as real neurons do. SNNs remained largely unexplored for many years because they were considered too complex and too difficult to analyze. Since Sander Bohte introduced SpikeProp as a supervised learning model for SNNs in 2002, many characteristics of SNNs that were previously unclear have come to be understood. Despite the success of Bohte's pioneering work on SpikeProp, his algorithm is constrained by fixed-time convergence in the iterative search for optimum initial weights and by the lengthy procedure required to carry out the complete learning sequence for classification. This thesis therefore proposes improvements to Bohte's algorithm: SpikeProp with the acceleration factors of Particle Swarm Optimization (PSO), denoted Model 1; SpikeProp using angle-driven learning rate dependency, Model 2; SpikeProp using radius initial weights, Model 3a; and SpikeProp using Differential Evolution (DE) weight initialization, Model 3b. The hybridization of Model 1 and Model 2 gives Model 4, and Model 5 is obtained from the hybridization of Model 1, Model 3a and Model 3b. With these new methods, the errors were observed to be reduced accordingly. Training and classification properties of the proposed methods were investigated using datasets from the Machine Learning Benchmark Repository. Performance results of the proposed models (presented as graphs of time error against iteration time, tables of the number of iterations required to reduce the time error to its saturation level, and bar charts of accuracy at the saturation time error for all datasets) were compared with one another and with the results of standard SpikeProp and Backpropagation (BP). The results indicated that Model 4, Model 5 and Model 1 perform better than Model 2, Model 3a and Model 3b. The findings also reveal that all the proposed models outperform standard SpikeProp and BP on all datasets used.
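The abstract names two core ingredients of the proposed models: SpikeProp-style minimisation of output spike-time error, and Particle Swarm Optimization (PSO) with acceleration factors for weight initialisation. The sketch below is a minimal, generic illustration of those two pieces only; it is not the thesis's implementation, and the error function, the PSO coefficients (inertia w, acceleration factors c1 and c2) and all helper names are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: a SpikeProp-style spike-time error and one generic PSO
# update step over candidate weight vectors. All names and parameter
# values are illustrative assumptions, not the models from the thesis.

def spike_time_error(actual_times, desired_times):
    """Sum-of-squares error between actual and desired output spike times,
    the quantity SpikeProp-style supervised learning drives toward zero."""
    actual = np.asarray(actual_times, dtype=float)
    desired = np.asarray(desired_times, dtype=float)
    return 0.5 * np.sum((actual - desired) ** 2)

def pso_step(positions, velocities, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5, rng=None):
    """One generic PSO update.

    w  : inertia weight
    c1 : cognitive acceleration factor (pull toward each particle's best)
    c2 : social acceleration factor (pull toward the swarm's best)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(positions.shape)
    r2 = rng.random(positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (personal_best - positions)
                  + c2 * r2 * (global_best - positions))
    return positions + velocities, velocities

if __name__ == "__main__":
    # Toy usage: 10 particles, each a 5-dimensional candidate weight vector.
    rng = np.random.default_rng(0)
    pos = rng.normal(size=(10, 5))
    vel = np.zeros_like(pos)
    pos, vel = pso_step(pos, vel, personal_best=pos.copy(),
                        global_best=pos[0], rng=rng)
    print(spike_time_error([10.0, 12.5], [10.0, 12.0]))  # -> 0.125
```

In PSO terms, c1 and c2 are the "acceleration factors" the abstract refers to: they weight how strongly each candidate weight vector is pulled toward its own best position and toward the swarm's best position found so far.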


Bibliographic Details
Main Author: Ahmed, Falah Y. H.
Format: Thesis
Language: English
Published: 2013
Subjects: QA Mathematics
Online Access: http://eprints.utm.my/id/eprint/33796/5/FalahYHAhmedPFSKSM2013.pdf
id my-utm-ep.33796
record_format uketd_dc
spelling my-utm-ep.33796 2017-07-18T06:14:34Z Improved spikeprop algorithm for neural network learning 2013-05 Ahmed, Falah Y. H. QA Mathematics A Spiking Neural Network (SNN) uses individual spikes in the time domain to communicate and to perform computation, much as real neurons do. SNNs remained largely unexplored for many years because they were considered too complex and too difficult to analyze. Since Sander Bohte introduced SpikeProp as a supervised learning model for SNNs in 2002, many characteristics of SNNs that were previously unclear have come to be understood. Despite the success of Bohte's pioneering work on SpikeProp, his algorithm is constrained by fixed-time convergence in the iterative search for optimum initial weights and by the lengthy procedure required to carry out the complete learning sequence for classification. This thesis therefore proposes improvements to Bohte's algorithm: SpikeProp with the acceleration factors of Particle Swarm Optimization (PSO), denoted Model 1; SpikeProp using angle-driven learning rate dependency, Model 2; SpikeProp using radius initial weights, Model 3a; and SpikeProp using Differential Evolution (DE) weight initialization, Model 3b. The hybridization of Model 1 and Model 2 gives Model 4, and Model 5 is obtained from the hybridization of Model 1, Model 3a and Model 3b. With these new methods, the errors were observed to be reduced accordingly. Training and classification properties of the proposed methods were investigated using datasets from the Machine Learning Benchmark Repository. Performance results of the proposed models (presented as graphs of time error against iteration time, tables of the number of iterations required to reduce the time error to its saturation level, and bar charts of accuracy at the saturation time error for all datasets) were compared with one another and with the results of standard SpikeProp and Backpropagation (BP). The results indicated that Model 4, Model 5 and Model 1 perform better than Model 2, Model 3a and Model 3b. The findings also reveal that all the proposed models outperform standard SpikeProp and BP on all datasets used. 2013-05 Thesis http://eprints.utm.my/id/eprint/33796/ http://eprints.utm.my/id/eprint/33796/5/FalahYHAhmedPFSKSM2013.pdf application/pdf en public http://dms.library.utm.my:8080/vital/access/manager/Repository/vital:70130?site_name=Restricted Repository phd doctoral Universiti Teknologi Malaysia, Faculty of Computing Faculty of Computing
institution Universiti Teknologi Malaysia
collection UTM Institutional Repository
language English
topic QA Mathematics
spellingShingle QA Mathematics
Ahmed, Falah Y. H.
Improved spikeprop algorithm for neural network learning
description A Spiking Neural Network (SNN) uses individual spikes in the time domain to communicate and to perform computation, much as real neurons do. SNNs remained largely unexplored for many years because they were considered too complex and too difficult to analyze. Since Sander Bohte introduced SpikeProp as a supervised learning model for SNNs in 2002, many characteristics of SNNs that were previously unclear have come to be understood. Despite the success of Bohte's pioneering work on SpikeProp, his algorithm is constrained by fixed-time convergence in the iterative search for optimum initial weights and by the lengthy procedure required to carry out the complete learning sequence for classification. This thesis therefore proposes improvements to Bohte's algorithm: SpikeProp with the acceleration factors of Particle Swarm Optimization (PSO), denoted Model 1; SpikeProp using angle-driven learning rate dependency, Model 2; SpikeProp using radius initial weights, Model 3a; and SpikeProp using Differential Evolution (DE) weight initialization, Model 3b. The hybridization of Model 1 and Model 2 gives Model 4, and Model 5 is obtained from the hybridization of Model 1, Model 3a and Model 3b. With these new methods, the errors were observed to be reduced accordingly. Training and classification properties of the proposed methods were investigated using datasets from the Machine Learning Benchmark Repository. Performance results of the proposed models (presented as graphs of time error against iteration time, tables of the number of iterations required to reduce the time error to its saturation level, and bar charts of accuracy at the saturation time error for all datasets) were compared with one another and with the results of standard SpikeProp and Backpropagation (BP). The results indicated that Model 4, Model 5 and Model 1 perform better than Model 2, Model 3a and Model 3b. The findings also reveal that all the proposed models outperform standard SpikeProp and BP on all datasets used.
format Thesis
qualification_name Doctor of Philosophy (PhD.)
qualification_level Doctorate
author Ahmed, Falah Y. H.
author_facet Ahmed, Falah Y. H.
author_sort Ahmed, Falah Y. H.
title Improved spikeprop algorithm for neural network learning
title_short Improved spikeprop algorithm for neural network learning
title_full Improved spikeprop algorithm for neural network learning
title_fullStr Improved spikeprop algorithm for neural network learning
title_full_unstemmed Improved spikeprop algorithm for neural network learning
title_sort improved spikeprop algorithm for neural network learning
granting_institution Universiti Teknologi Malaysia, Faculty of Computing
granting_department Faculty of Computing
publishDate 2013
url http://eprints.utm.my/id/eprint/33796/5/FalahYHAhmedPFSKSM2013.pdf
_version_ 1747816187260043264