
A Study On Direct Supervised Learning Algorithms For Deep Spiking Neural Networks

Posted on: 2021-02-25  Degree: Master  Type: Thesis
Country: China  Candidate: B X Zhao  Full Text: PDF
GTID: 2428330614967723  Subject: Electronic Science and Technology
Abstract/Summary:
Spiking neural networks (SNNs), with their biological plausibility, are capable of processing spatio-temporal information in an event-driven manner, which makes them an ideal model for high-efficiency neuromorphic hardware. However, training deep SNNs directly remains challenging because the spiking neuron model is non-differentiable. Efficient learning algorithms for deep SNNs are still lacking, which limits their application. To address these problems, this thesis proposes end-to-end direct supervised learning algorithms for training deep SNNs. The innovations and contributions of this thesis are summarized as follows:

(1) This thesis proposes a learning algorithm based on a discrete Leaky Integrate-and-Fire (LIF) neuron model and a surrogate gradient to train deep SNNs directly, balancing computational efficiency, training speed, and accuracy (a minimal sketch of this mechanism is given after the abstract). An encoder-decoder network architecture is proposed in which the encoder extracts features from the input spike trains and the decoder converts spikes to rates, enabling multiple tasks.

(2) This thesis optimizes the network architecture, based on the discrete current-based LIF (C-LIF) neuron model, to support both spiking and non-spiking inputs. When the network receives non-spiking inputs, the encoder automatically and accurately encodes them as spikes. Experimental results show that the learning algorithm achieves classification accuracies of 98.40% and 95.83% on the dynamic neuromorphic datasets MNIST-DVS and DVS-Gestures, respectively, and 99.58% and 95.97% on the static vision datasets MNIST and SVHN, respectively, which are comparable to existing state-of-the-art results.

(3) This thesis proposes two acceleration strategies, backward phase optimization and layer-wise Freeze Out, to improve training efficiency and reduce power consumption. Backward phase optimization includes two approaches: equidistant backward phases and stochastic backward phases. Layer-wise Freeze Out is realized through adaptive learning rate optimization algorithms and three types of preset spacing: linear spacing, increasing quadratic spacing, and decreasing quadratic spacing (see the second sketch after the abstract). Experimental results show that training is accelerated by 42.6% with a single strategy and by 60.0% with hybrid strategies while maintaining the same level of accuracy.
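The following is a minimal illustrative sketch of surrogate-gradient training for a discrete LIF neuron, not the code from the thesis; the rectangular surrogate window, decay constant, threshold, and hard reset are all assumptions made for the example.

    import torch

    class SpikeFn(torch.autograd.Function):
        """Heaviside spike in the forward pass, rectangular surrogate in the backward pass."""
        @staticmethod
        def forward(ctx, v_minus_thresh):
            ctx.save_for_backward(v_minus_thresh)
            return (v_minus_thresh > 0).float()

        @staticmethod
        def backward(ctx, grad_out):
            (v,) = ctx.saved_tensors
            # Pass gradients only near the firing threshold (window width 0.5 is an assumption).
            return grad_out * (v.abs() < 0.5).float()

    def lif_step(x, v, tau=2.0, v_th=1.0):
        """One discrete LIF update: leak, integrate the input current, fire, hard reset."""
        v = v / tau + x                  # leaky integration (decay factor assumed)
        spike = SpikeFn.apply(v - v_th)  # non-differentiable step, differentiable surrogate
        v = v * (1.0 - spike)            # reset the membrane potential after a spike
        return spike, v

Unrolled over the time steps of the input spike train, such a neuron can be trained end-to-end with backpropagation through time, because the non-differentiable Heaviside step is replaced by the surrogate only in the backward pass.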
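Likewise, a hedged sketch of how layer-wise freeze epochs could be preset with the three spacing schemes named above (linear, increasing quadratic, decreasing quadratic); the function name, parameters, and normalisation to the final training epoch are assumptions, not the thesis's implementation.

    def freeze_epochs(num_layers, total_epochs, scheme="linear"):
        """Epoch at which each layer (shallowest to deepest) stops being updated."""
        ts = [(i + 1) / num_layers for i in range(num_layers)]   # fractions in (0, 1]
        if scheme == "linear":
            frac = ts
        elif scheme == "quadratic_increasing":
            frac = [t ** 2 for t in ts]                 # gaps between freeze points grow
        elif scheme == "quadratic_decreasing":
            frac = [1.0 - (1.0 - t) ** 2 for t in ts]   # gaps between freeze points shrink
        else:
            raise ValueError(scheme)
        return [round(f * total_epochs) for f in frac]

    # Example: a 5-layer network trained for 100 epochs.
    print(freeze_epochs(5, 100, "linear"))                # [20, 40, 60, 80, 100]
    print(freeze_epochs(5, 100, "quadratic_increasing"))  # [4, 16, 36, 64, 100]
    print(freeze_epochs(5, 100, "quadratic_decreasing"))  # [36, 64, 84, 96, 100]

Once a layer reaches its freeze epoch, its learning rate is driven to zero and its backward computation can be skipped, which is consistent with the training-speed savings reported in the abstract.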
Keywords/Search Tags: Spiking neural network, Surrogate Gradient, LIF neuron model, Backward phase optimization, Layer-wise Freeze Out