The human brain computes efficiently, making fast and accurate judgments about the external environment while consuming very little energy. Artificial neural networks reproduce the brain's information processing at the macroscopic level and have achieved great success in artificial intelligence, but they require massive amounts of data and computing power, which makes them difficult to deploy on resource-constrained edge devices. Brain-inspired computing, a new computing paradigm, seeks a breakthrough in the microscopic operating mechanisms of biological neurons and builds spiking neural network (SNN) models with low power consumption and low latency. However, the theoretical development and application of SNNs are still limited in several respects, including the lack of a reasonable interpretation of the network architecture, the lack of efficient learning algorithms, and the lack of effective coding methods. Accordingly, this thesis carries out the following research:

1. This thesis proposes a brain-inspired spiking neuron model with a biological inhibition mechanism (LISNM) and extends it to a brain-inspired spiking neural network that can be computed iteratively in the temporal and spatial domains. The model accounts for inhibition between interconnected neurons: when a neuron is stimulated and becomes excited, it excites nearby neurons, and the excitation of the latter in turn inhibits the former. The model therefore captures the phenomenon that neurons within a layer are inhibited by their neighbors, while information between layers still propagates forward through the spatial and temporal domains (a minimal sketch of this dynamics is given after this summary).

2. This thesis proposes a cross-scale learning algorithm (GLHSTBP). The method considers both the macroscopic information computation of the biological brain and the microscopic information processing mechanism of biological neurons, combining global supervised learning with local unsupervised learning. In global learning, a negative log-likelihood error function based on spike counts is defined; the error propagates backward from the output layer through time and space, and the synaptic weights are updated from this error. In addition, a gradual surrogate gradient method is proposed. It not only addresses the non-differentiability of the spike with respect to the membrane voltage when training multi-layer spiking neural networks, but also ensures that the network receives enough gradient information at the start of training to train effectively, and more accurate gradient information toward the end of training for the classification task. In local learning, a sensitivity factor is defined for each neuron: the more spikes a neuron emits, the smaller its sensitivity factor, and the intra-layer synaptic weight updates depend on the sensitivity factors of the neurons within the layer (sketches of the gradual surrogate gradient and of the local update follow this summary).

3. To verify the effectiveness of the proposed LISNM-based brain-inspired spiking neural network and the GLHSTBP learning algorithm, this thesis designs a novel coding scheme that combines time-to-first-spike coding with rate coding to convert input information into spike trains (a sketch of this coding scheme also follows this summary). The model and algorithm are then applied to UCI machine learning classification tasks, handwritten digit image recognition, and speech signal recognition; the results demonstrate the efficiency of the proposed network model and algorithm.
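
The abstract does not give the LISNM equations; the following is a minimal sketch of a spiking layer with lateral inhibition, assuming a leaky integrate-and-fire style membrane update. All parameter names (tau, v_th, alpha_inh) and the exact form of the inhibition term are illustrative assumptions, not the thesis's notation.

```python
import numpy as np

def lisnm_layer_step(v, x, w, w_lat, tau=0.9, v_th=1.0, alpha_inh=0.2):
    """One time step of a spiking layer with lateral inhibition (sketch).

    v     : (N,)   membrane potentials at the previous time step
    x     : (M,)   input spikes from the previous layer at this step
    w     : (N, M) feed-forward synaptic weights
    w_lat : (N, N) non-negative lateral coupling between neighboring neurons
    """
    # Leaky integration of feed-forward input (spatial propagation).
    v = tau * v + w @ x
    # Spikes are emitted where the potential crosses the threshold.
    s = (v >= v_th).astype(float)
    # Lateral inhibition: spikes of nearby neurons push the potential down,
    # so an excited neuron suppresses (and is suppressed by) its neighbors.
    v = v - alpha_inh * (w_lat @ s)
    # Reset the neurons that fired.
    v = v * (1.0 - s)
    return v, s
```

Iterating this step over T time steps and stacking layers gives the temporal and spatial propagation described in item 1.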
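The gradual surrogate gradient in item 2 can be illustrated as a surrogate derivative whose sharpness grows over training: wide early so that many neurons receive gradient, narrow late so that the gradient is more accurate near the threshold. The sigmoid surrogate and the linear sharpening schedule below are assumptions for illustration only.

```python
import numpy as np

def gradual_surrogate_grad(v, v_th, epoch, max_epochs, beta_min=1.0, beta_max=10.0):
    """Surrogate derivative ds/dv for the non-differentiable spike function (sketch)."""
    # Sharpen the surrogate as training progresses (schedule is an assumption).
    beta = beta_min + (beta_max - beta_min) * epoch / max_epochs
    # Derivative of a sigmoid centered at the threshold, a common surrogate choice.
    sig = 1.0 / (1.0 + np.exp(-beta * (v - v_th)))
    return beta * sig * (1.0 - sig)
```

In backpropagation through time and space, this quantity replaces the true (zero-almost-everywhere) derivative of the spike with respect to the membrane voltage.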
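The local unsupervised update in item 2 can be sketched as a Hebbian-style rule scaled by a per-neuron sensitivity factor that shrinks as the neuron fires more. The functional form of the sensitivity factor and of the update term below are assumptions; the thesis only states that the factor decreases with the spike count and modulates the intra-layer weight update.

```python
import numpy as np

def local_sensitivity_update(w, pre_spike_counts, post_spike_counts, eta=1e-3, k=1.0):
    """Unsupervised intra-layer weight update scaled by a sensitivity factor (sketch).

    pre_spike_counts  : (M,) presynaptic spike counts over the current window
    post_spike_counts : (N,) postsynaptic spike counts over the current window
    w                 : (N, M) intra-layer weights to be updated
    """
    # Sensitivity factor: the more a neuron spikes, the smaller the factor.
    sensitivity = 1.0 / (1.0 + k * post_spike_counts)
    # Hebbian-style local term, modulated per postsynaptic neuron by its sensitivity.
    dw = eta * sensitivity[:, None] * np.outer(post_spike_counts, pre_spike_counts)
    return w + dw
```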
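The hybrid coding scheme in item 3 can be illustrated as follows: stronger inputs fire earlier (time-to-first-spike part) and then fire more often (rate part). The exact combination rule used in the thesis is not given in this summary; the encoder below is one plausible realization under that assumption.

```python
import numpy as np

def hybrid_encode(x, T=20, rng=None):
    """Encode intensities x in [0, 1] into a (T, len(x)) spike train (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    # Time-to-first-spike part: higher intensity -> earlier first spike.
    first = np.round((1.0 - x) * (T - 1)).astype(int)
    # Rate part: after the first spike, fire with probability x at each step.
    spikes = (rng.random((T, x.size)) < x).astype(float)
    t = np.arange(T)[:, None]
    spikes = spikes * (t >= first[None, :])        # suppress spikes before the first spike
    spikes[first, np.arange(x.size)] = (x > 0)     # guarantee the first spike itself
    return spikes
```

Feeding such spike trains to the LISNM-based network, step by step over T, is how the input information would reach the model in the experiments on UCI, handwritten digit, and speech tasks.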