
Research on Deep Spiking Neural Network Optimization Based on DNN Training and Conversion

Posted on: 2021-03-12
Degree: Master
Type: Thesis
Country: China
Candidate: Y C Mai
Full Text: PDF
GTID: 2428330611967579
Subject: Computer technology
Abstract/Summary:
With the development of neuroscience and neural computing, building brain-inspired cognitive systems with low power consumption, low latency, and high precision has become a major focus and challenge of artificial intelligence research in recent years. The Spiking Neural Network (SNN) is one of the most promising routes to these goals. Because effective algorithms for directly training deep SNNs are lacking, researchers have proposed constructing deep SNNs by training a Deep Neural Network (DNN) and converting it, so that the resulting deep SNN approaches the DNN's accuracy while retaining low power consumption and low latency. However, because artificial neurons differ greatly from biological neurons, the conversion process introduces an approximation error, which causes an accuracy gap between the deep SNN and the DNN. We therefore study the approximation error of the conversion process from the perspectives of network structure, neuron model, and data encoding, and on this basis propose a new artificial-neuron activation function and an SNN parameter optimization method that reduce the approximation error and hence the accuracy loss. The main contributions are:

(1) An activation function, Rand Softplus (RSP), that is closer to the Leaky Integrate-and-Fire (LIF) model. Because existing activation functions approximate the response of the LIF model poorly, we first run simulation experiments on the LIF model. By introducing a parameter
that reflects the neuron's randomness into the input-output model, we obtain an activation function, RSP, that more closely matches the response characteristics of the LIF model. It is then used for DNN training to reduce the accuracy loss caused by the difference between the artificial neuron model and the LIF neuron model.

(2) A parameter optimization method for SNNs. We study the existing parameter optimization methods for SNNs. Because existing threshold optimization methods leave some neurons difficult to activate, we propose an adaptive threshold optimization method that adjusts the neuron threshold dynamically according to the input data, thus promoting the transmission of information through the network. To address the additional approximation error caused by statically setting the compensation factor, we introduce a spike counting mechanism that obtains the real-time firing frequency and computes the compensation factor dynamically. This both solves the problem of the firing rate decreasing as the number of layers increases and reduces the approximation error.

Experimental results show that the proposed activation function not only reduces the accuracy loss of the deep SNN but also improves the network's noise immunity. For the parameter optimization method, the network based on our approach outperforms existing work on the CIFAR-10 dataset.
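As a rough illustration of the approximation problem the abstract describes (the abstract does not give the actual RSP formula, so nothing below is the thesis's method), the following sketch simulates an LIF neuron's firing rate under constant input current and shows a Softplus-style smooth rectifier of the kind RSP builds on. The `noise_level` parameter is a hypothetical stand-in for the randomness parameter the thesis introduces:

```python
import math
import random

def lif_firing_rate(current, tau=20.0, v_th=1.0, v_reset=0.0,
                    dt=1.0, steps=1000, noise_level=0.0, seed=0):
    """Estimate the firing rate (spikes per timestep) of a leaky
    integrate-and-fire neuron driven by a constant input current,
    with optional Gaussian membrane noise."""
    rng = random.Random(seed)
    v, spikes = v_reset, 0
    for _ in range(steps):
        noise = noise_level * rng.gauss(0.0, 1.0)
        # Leaky integration toward the input current.
        v += (dt / tau) * (-(v - v_reset) + current) + noise
        if v >= v_th:          # threshold crossing: spike and reset
            spikes += 1
            v = v_reset
    return spikes / steps

def softplus(x, k=1.0):
    """Smooth rectifier commonly used to approximate the LIF
    rate-response curve; RSP is presumably a variant of this family."""
    return math.log1p(math.exp(k * x)) / k
```

With `noise_level=0` the LIF rate is exactly zero below threshold and rises smoothly above it, which is why a Softplus-shaped curve (rather than a hard ReLU) is the natural fit; adding noise rounds the knee of the curve further, motivating a randomness-dependent activation.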
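The adaptive-threshold idea can likewise be sketched. The class below is an illustrative integrate-and-fire layer whose threshold scales with the largest input drive observed so far, and whose per-neuron spike counts give a real-time firing frequency of the kind a dynamic compensation factor could use. The class name, the soft-reset choice, and the particular adaptation rule are assumptions for illustration, not the thesis's algorithm:

```python
class IFNeuronLayer:
    """Toy integrate-and-fire layer for rate-based DNN-to-SNN
    conversion, with a data-dependent (adaptive) threshold."""

    def __init__(self, weights, base_threshold=1.0):
        self.weights = weights               # one weight vector per neuron
        self.base_threshold = base_threshold
        self.v = [0.0] * len(weights)        # membrane potentials
        self.spike_counts = [0] * len(weights)
        self.steps = 0
        self.input_max = 1e-9                # running max drive; avoids /0

    def step(self, spikes_in):
        """Advance one timestep: integrate weighted input spikes and
        emit a spike wherever the adaptive threshold is crossed."""
        self.steps += 1
        out = []
        for i, w in enumerate(self.weights):
            drive = sum(wj * sj for wj, sj in zip(w, spikes_in))
            self.input_max = max(self.input_max, abs(drive))
            threshold = self.base_threshold * self.input_max  # adapt to data
            self.v[i] += drive
            if self.v[i] >= threshold:
                self.v[i] -= threshold       # soft reset keeps the residue
                self.spike_counts[i] += 1
                out.append(1)
            else:
                out.append(0)
        return out

    def firing_rates(self):
        """Observed firing frequency per neuron, the quantity a dynamic
        compensation factor would be computed from."""
        return [c / self.steps for c in self.spike_counts]
```

Driving a two-neuron layer with weights 1.0 and 0.25 by a constant input spike train yields firing rates 1.0 and 0.25, i.e. the rates recover the analog activations, which is the core premise of rate-based conversion.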
Keywords/Search Tags:Spiking Neural Networks, Biological Neurons, Activation Function, Parameter Optimization