
Research On The Beneficial Effects Of Noise In Suprathreshold Stochastic Resonance Neural Networks And Classification Algorithm

Posted on: 2023-09-13    Degree: Master    Type: Thesis
Country: China    Candidate: X J Liu    Full Text: PDF
GTID: 2568306833964799    Subject: Systems Science
Abstract/Summary:
In a nonlinear system composed of a single threshold function, adding an appropriate amount of noise can enhance the system's output response to a subthreshold input signal; this is the stochastic resonance phenomenon. Research on stochastic resonance focuses on the positive role of noise in system performance, and stochastic resonance has since become an explanatory mechanism for neuronal stochastic plasticity in neuroscience. In view of the collective behavior of large populations of neurons in transmitting biological information, studies have shown that when multiple threshold units respond in parallel to a suprathreshold input signal, adding mutually independent and identically distributed noise components also produces a stochastic resonance phenomenon, known as suprathreshold stochastic resonance. The positive role of added noise in improving system performance is also called a noise benefit. This thesis mainly studies noise benefits in suprathreshold stochastic resonance neural networks and in the nearest neighbor classification algorithm.

Firstly, a threshold-type neural network model based on the suprathreshold stochastic resonance mechanism is constructed. Among the different activation functions of neurons, the threshold function has the advantages of low hardware cost and easy implementation compared with continuously differentiable activation functions. A trained threshold network has a simple computation process, and its outputs take only the values 0 or 1, which makes it especially suitable for classification tasks. However, the threshold function is non-differentiable at the response threshold and has zero gradient elsewhere, so the gradient-based error backpropagation (BP) algorithm cannot be used directly to train a threshold network. Inspired by the phenomenon of suprathreshold stochastic resonance, we add independent and identically distributed noise samples to a parallel array of threshold units, so that each array can be regarded as a suprathreshold stochastic resonance model. Theoretical analysis shows that when the number of threshold units in the array is large enough, the mean output of the array asymptotically approaches the expectation of the threshold function with respect to the probability density function of the added noise. Thus, the suprathreshold stochastic resonance model can be regarded as an artificial neuron whose activation function becomes continuously differentiable, and this transformation lays the foundation for training the threshold network with the BP algorithm. Therefore, during network training, the parameters of the noise probability density in the noise-smoothed threshold activation function, such as the noise intensity, are also updated and optimized by the stochastic gradient descent algorithm until the noise intensity converges to a non-zero optimal value. This training procedure is called the noise-boosting BP algorithm. Furthermore, we theoretically analyze the local optimal solutions of the network parameters and the convergence of the noise-boosting BP algorithm for training the designed neural network. In the testing phase, for ease of practical implementation, the noise-smoothed threshold activation function is realized by the threshold array together with the non-zero optimal noise obtained during training, and the output of the neuron is the average of the outputs of the threshold units in the array.
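As a rough illustration of the smoothing mechanism described above, the following Python sketch (a minimal, hypothetical example; the array size, the Gaussian noise assumption, and the threshold value are illustrative choices, not parameters taken from the thesis) compares the empirical mean output of a parallel threshold array driven by i.i.d. noise with the smooth expectation it approximates, which for Gaussian noise is the Gaussian cumulative distribution function.

import numpy as np
from scipy.stats import norm

def threshold_unit(x, theta=0.0):
    # Hard threshold: outputs 1 if the input exceeds theta, else 0.
    return (x > theta).astype(float)

def noisy_array_activation(x, sigma, n_units=1000, theta=0.0, rng=None):
    # Mean output of n_units parallel threshold units, each receiving the
    # same input x plus an independent zero-mean Gaussian noise sample.
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=n_units)
    return threshold_unit(x + noise, theta).mean()

def smoothed_activation(x, sigma, theta=0.0):
    # Large-array limit: E[threshold(x + noise)] = P(noise > theta - x),
    # which for Gaussian noise is the differentiable Gaussian CDF.
    return norm.cdf((x - theta) / sigma)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sigma = 0.5
    for x in (-1.0, -0.2, 0.0, 0.3, 1.0):
        empirical = noisy_array_activation(x, sigma, n_units=5000, rng=rng)
        analytic = smoothed_activation(x, sigma)
        print(f"x={x:+.1f}  array mean={empirical:.3f}  smooth CDF={analytic:.3f}")

Because this smoothed activation is differentiable with respect to both the input and the noise intensity sigma, sigma itself can be treated as a trainable parameter, which is the idea behind the noise-boosting BP algorithm described above.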
The threshold network trained by the noise-boosting BP algorithm is applied to data classification and handwritten digit image recognition. Compared with the test classification accuracy of a full-precision network built from continuous activation functions, the designed low-precision threshold network achieves the same or even higher classification accuracy. To summarize, the benefits of noise in the threshold network are twofold: first, the added noise transforms the neuron function of the threshold network into a smooth, differentiable activation function, making it possible to train the network with the stochastic gradient descent algorithm; second, an appropriate noise intensity increases the test classification accuracy of the threshold network.

To study the role of noise injection in improving the generalization ability of pattern recognition models, this thesis also investigates the convergence of the noise intensity when the noise added to each hidden layer of the threshold network obeys different distributions, and compares the classification accuracy of the threshold network on different test data sets under these conditions. In addition, this thesis studies the injection of noise components into the training data of the k-nearest neighbor algorithm for data classification, and finds that artificial noise injection can also improve the classification accuracy of this algorithm. These results show that the noise injection method has a certain universality in improving the generalization ability of pattern recognition models.
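As a rough sketch of the noise-injection idea for the k-nearest neighbor classifier (the data set, the Gaussian noise level, and the augmentation scheme below are illustrative assumptions, not the thesis's exact protocol), the training set can be enlarged with noisy copies of the original samples before fitting the classifier:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def augment_with_noise(X, y, sigma=0.1, copies=3, rng=None):
    # Append 'copies' noisy versions of each training sample; labels are unchanged.
    rng = np.random.default_rng() if rng is None else rng
    noisy = [X + rng.normal(0.0, sigma, size=X.shape) for _ in range(copies)]
    X_aug = np.vstack([X] + noisy)
    y_aug = np.concatenate([y] * (copies + 1))
    return X_aug, y_aug

if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    plain = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    X_aug, y_aug = augment_with_noise(X_tr, y_tr, sigma=0.1, rng=np.random.default_rng(0))
    noisy = KNeighborsClassifier(n_neighbors=5).fit(X_aug, y_aug)

    print("test accuracy without noise injection:", plain.score(X_te, y_te))
    print("test accuracy with noise injection:   ", noisy.score(X_te, y_te))

Whether the noisy variant helps depends on the noise intensity and the data; the thesis's finding is that an appropriately chosen injected noise level can raise the test classification accuracy.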
Keywords/Search Tags:Noise benefit, Threshold neural network, Suprathreshold stochastic resonance, Pattern classification, Noise-smoothed activation function, k-nearest neighbor