
Research on Relative Ordering Learning and Conversion Algorithms for Spiking Neural Networks

Posted on: 2019-07-16    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Z T Lin    Full Text: PDF
GTID: 1368330572468690    Subject: Circuits and Systems
Abstract/Summary:
As the third generation of artificial neural networks, spiking neural networks are more biologically plausible and more computationally efficient, and they constitute one of the important research directions for future brain-like computing and artificial intelligence. However, owing to the lack of relevant theory on spike coding and computation, existing learning algorithms train inefficiently and perform unsatisfactorily, which limits the practicality of current spiking neural networks. Developing more efficient learning algorithms and direct DNN-to-SNN conversion methods are two important ways to improve the performance of spiking neural networks. This dissertation focuses on improving the practicality of spiking neural networks, studying new learning algorithms based on the relative ordering of spikes as well as low-latency, low-cost spiking conversion methods. The main contributions are as follows.

1. A learning algorithm based on the relative ordering of spikes. Existing spiking neural networks use fixed, hand-designed spike times as learning targets, with the potential problem that the input spike pattern may not match the learning target. This dissertation proposes the relative ordering learning (ROL) algorithm, which requires no specific spike times as learning targets: the firing order of the output-layer neurons serves as the supervisory signal guiding the synaptic weight updates, and different error functions are applied to neurons in different states to improve learning ability and efficiency. Extensive experiments verify the high robustness, learning efficiency, and generalization ability of the algorithm. Finally, the spiking network trained with ROL is compared with a traditional artificial neural network: at the same network size, the recognition performance of the spiking model is comparable to that of the MLP model while requiring only about 15% of the MLP's computational load, highlighting the high energy efficiency of the spike mechanism. A minimal sketch of the relative-ordering idea follows.
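The abstract states only that the output firing order serves as the supervisory signal; the Python sketch below illustrates one way such an update could look, assuming access to the first-spike times of the output neurons. The function rol_update, the learning rate, and the simple potentiate/depress rule are illustrative assumptions, not the dissertation's actual update rule.

```python
import numpy as np

def rol_update(weights, inputs, spike_times, target, lr=0.01):
    """One relative-ordering learning step (illustrative sketch).

    weights     : (n_out, n_in) synaptic weight matrix
    inputs      : (n_in,) presynaptic activity (e.g., spike counts)
    spike_times : (n_out,) first-spike time of each output neuron,
                  np.inf for neurons that stayed silent
    target      : index of the correct class
    """
    # The supervisory signal is the firing ORDER, not exact spike times:
    # the target neuron should fire before every other output neuron.
    earliest_other = np.min(np.delete(spike_times, target))

    # Potentiate the target neuron if it fired too late (or not at all).
    if spike_times[target] >= earliest_other:
        weights[target] += lr * inputs

    # Depress non-target neurons that fired no later than the target.
    for j in range(weights.shape[0]):
        if j != target and np.isfinite(spike_times[j]) \
                and spike_times[j] <= spike_times[target]:
            weights[j] -= lr * inputs
    return weights
```

Because only the ordering is constrained, no hand-designed target spike train is needed, which is exactly the mismatch problem the abstract points out.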
2. Low-inference-latency spiking neural networks based on activation quantization. A spiking model converted from a deep neural network by traditional methods needs a long simulation time, generating many spikes to reproduce the input-output relationship of the source network before reaching comparable recognition performance. By analyzing the requirements for equivalently replacing the activation function with a spiking neuron, the positive relationship between the inference latency of a spiking neural network and the activation values is revealed, and a method for realizing deep spiking neural networks based on activation-value quantization is proposed. For the quantization itself, a layer-by-layer quantization algorithm based on retraining is proposed: the optimal quantization resolution is obtained by equal-interval scanning with L2 quantization-error minimization (as sketched below), and layer-by-layer quantization with retraining yields a quantized network model without performance loss. Spiking conversion of the quantized network then requires only a small number of spikes to accurately reproduce the proportional relationships between activation values, reducing both the inference latency and the computational load of the converted spiking network.

3. Spiking conversion methods for the pooling and softmax layers of convolutional neural networks. Spiking conversion of convolutional neural networks is an important way to improve their real-time performance and energy efficiency. To address the complexity and cost of existing conversion methods, a pooling-incorporation technique and a direct mapping method for the softmax layer are proposed. By attenuating the convolutional-layer weights and labeling the convolutional-layer spikes as pooling spikes, the average-pooling function is integrated into the convolutional spiking neurons (also sketched below); by adding inhibitory synaptic connections between the neurons in a pooling region and labeling the spikes generated there as the corresponding pooling spikes, the max-pooling function is likewise integrated into the convolutional spiking neurons. By changing the behavior of the output-layer neurons so that they only accumulate membrane potential without emitting spikes, a spiking form of the softmax layer is realized directly with the softmax layer's weights. For a typical convolution-pooling stacked network (2×2 pooling), these methods reduce the number of neurons in the converted spiking model by about 20% and the number of spikes by about 80%.
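To make the quantization-step search of contribution 2 concrete, here is a minimal Python sketch of equal-interval scanning with L2 quantization-error minimization. The candidate-step range and the names quantize and find_quant_step are illustrative assumptions; the dissertation's full layer-by-layer retraining loop is not reproduced.

```python
import numpy as np

def quantize(acts, step, n_levels):
    """Uniform quantizer: snap activations to multiples of `step`,
    clipped to n_levels discrete levels (assumes acts >= 0, e.g. ReLU)."""
    return step * np.clip(np.round(acts / step), 0, n_levels - 1)

def find_quant_step(acts, n_levels, n_candidates=200):
    """Scan candidate step sizes at equal intervals and keep the one
    minimizing the L2 quantization error (illustrative sketch)."""
    # Bracketing max(acts) / n_levels with the candidate range is a
    # heuristic choice made for this sketch.
    candidates = np.linspace(acts.max() / (4 * n_levels),
                             2 * acts.max() / n_levels,
                             n_candidates)
    errors = [np.sum((acts - quantize(acts, s, n_levels)) ** 2)
              for s in candidates]
    return candidates[int(np.argmin(errors))]
```

In a layer-by-layer procedure, one would pick the step for the first layer this way, quantize that layer, retrain the layers above it to recover accuracy, and repeat for each subsequent layer.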
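The average-pooling incorporation of contribution 3 can be illustrated as follows. The attenuation of the convolutional weights by the pooling area and the labeling of spikes with their pooling region follow the abstract's description; the function names are hypothetical.

```python
import numpy as np

def incorporate_avg_pooling(conv_weights, pool_size=2):
    """Fold pool_size x pool_size average pooling into the preceding
    convolution by attenuating its weights. Each spike then delivers
    1/(pool_size**2) of its original charge downstream, so summing the
    spikes of the neurons in one pooling region reproduces the
    average-pooling output without a separate pooling layer."""
    return conv_weights / (pool_size * pool_size)

def pooling_region(row, col, pool_size=2):
    """Label a convolutional neuron's spikes with the pooling region they
    belong to, so downstream neurons treat all spikes from one region as
    coming from a single pooled unit."""
    return (row // pool_size, col // pool_size)
```

Because the dedicated pooling neurons disappear entirely, both the neuron count and the spike traffic of the converted network shrink, consistent with the savings quoted in the abstract.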
Keywords/Search Tags:spiking neural network, supervised learning, relative ordering learning, spiking conversion, activation value quantization, layer-by-layer retraining, average pooling incorporation, max-pooling incorporation