
Regularized Sparseness And Relearning Of Feedforward Neural Networks

Posted on: 2021-02-15    Degree: Master    Type: Thesis
Country: China    Candidate: M R Han    Full Text: PDF
GTID: 2428330626460398    Subject: Computational Mathematics
Abstract/Summary:
Feedforward neural networks (FNNs) were proposed on the basis of research results in modern neuroscience. Their basic principle is to simulate the interactive response mechanism of biological neural networks when stimulated by the outside world, and to model the functional relationship represented by the input and output samples of a data set. FNNs are widely used in time-series prediction, face recognition and simulation modeling. With the development of intelligent computing and the era of big data, the application of feedforward neural networks in real life encounters difficulties such as embedding large-scale networks in systems and slow training on large-scale data sets. Therefore, the development of fast-converging sparse networks with good generalization ability has gradually attracted attention and research. At present, several network-sparsification and learning-rate adaptation strategies exist, including the smoothing group L1/2 regularization method (SGL1/2) and the RMSProp algorithm. However, how to sparsify large-scale network structures and achieve fast convergence on large-scale data sets remains a direction for further research. In addition, how to retrain a network whose learning ability has declined because of over-sparsification is also a problem worth studying.

This thesis investigates a batch gradient descent algorithm with an adaptive learning rate under sparse smooth group L1/2 regularization (SSGL1/2) for feedforward neural networks on classification data sets. In addition, to address the decline in learning ability caused by over-sparsification, two relearning methods are compared: one based on the sparse structure and one that re-initializes the sparse network; a suitable relearning method is then given. Finally, numerical experiments on 10 data sets from the UCI database verify the sparsity and fast convergence of the algorithm. The research of this thesis mainly covers the following four aspects.

Firstly, SSGL1/2 is formed by adding the smooth L1/2 regularization term (SL1/2) to the loss function of SGL1/2. It is verified that SSGL1/2 enhances the sparsity of layer-to-layer connections compared with SGL1/2 and finds more redundant nodes compared with SL1/2.

Secondly, for the batch gradient descent algorithm, a learning-rate adaptation strategy based on network accuracy is investigated. It realizes a nonlinear update without presetting new parameters, and the adjustment process better reflects the current training status of the network. A batch gradient descent algorithm with an adaptive learning rate under sparse smooth group L1/2 regularization is given.
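To make the two ingredients above concrete, the following NumPy sketch combines a smoothed group-L1/2-style penalty with an accuracy-driven learning rate in batch gradient descent for a one-hidden-layer binary classifier. It is a minimal illustration, not the thesis's exact SSGL1/2 formulation: the smoothing (||v_g||^2 + eps)^(1/4), the grouping by the incoming weights of each hidden node, the rule eta = eta0 * (1 - accuracy), and all function names (`group_penalty`, `train`) are assumptions, and the individual-weight SL1/2 term of SSGL1/2 is omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def group_penalty(V, lam, eps=1e-4):
    # Smoothed group-L1/2-style penalty: each column of V (the incoming
    # weights of one hidden node) is one group, and (||v_g||^2 + eps)^(1/4)
    # stands in for ||v_g||^(1/2) so the gradient stays bounded near zero.
    sq = np.sum(V * V, axis=0, keepdims=True) + eps   # shape (1, n_hidden)
    value = lam * np.sum(sq ** 0.25)
    grad = lam * 0.5 * V * sq ** (-0.75)              # d(penalty)/dV
    return value, grad

def train(X, y, n_hidden=20, lam=1e-3, eta0=0.5, epochs=500, seed=0):
    # Batch gradient descent with the smoothed group penalty and an
    # accuracy-driven learning rate (assumed adjustment rule).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = rng.normal(scale=0.1, size=(d, n_hidden))     # input  -> hidden weights
    W = rng.normal(scale=0.1, size=(n_hidden, 1))     # hidden -> output weights
    for _ in range(epochs):
        H = sigmoid(X @ V)                            # hidden activations
        p = sigmoid(H @ W).ravel()                    # predicted P(y = 1)
        acc = np.mean((p > 0.5) == (y > 0.5))         # current training accuracy
        eta = eta0 * (1.0 - acc)                      # shrink the rate as accuracy grows
        delta_out = (p - y)[:, None] / n              # cross-entropy output error
        delta_hid = (delta_out @ W.T) * H * (1.0 - H)
        _, gpen = group_penalty(V, lam)               # penalty gradient on node groups
        W -= eta * (H.T @ delta_out)
        V -= eta * (X.T @ delta_hid + gpen)
    return V, W
```

Because the penalty acts on whole incoming-weight groups, columns of V that contribute little are driven toward zero together, which is what allows redundant hidden nodes to be identified and pruned.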
Thirdly, numerical experiments are designed with 10 data sets of different characteristics from the UCI database, using Matlab. Sparsity comparison experiments among SSGL1/2, SGL1/2 and SL1/2 are completed. The results show that SSGL1/2 doubles the sparsity of the hidden-layer nodes and of the connections between the hidden layer and the output layer, thereby enhancing the generalization ability of the network. A comparison of different learning-rate strategies is also completed, covering a constant learning rate, the RMSProp algorithm, the AdaDec algorithm, the AdaRecur algorithm and the accuracy-based adaptive learning rate. The results support the fast convergence of network training with the accuracy-based adaptive learning rate.

Finally, an experimental comparison of the two relearning methods, one based on the sparse structure and one that re-initializes the sparse network, is carried out, and a suitable relearning method for sparse networks is given according to the results.
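The sketch below illustrates, under stated assumptions, the two relearning strategies being compared: continuing training from the surviving weights of the pruned (sparse) structure versus re-initializing the same sparse topology. It reuses `sigmoid` and the weights returned by `train` from the previous sketch; the `prune_nodes` helper and its threshold `tol` are hypothetical and only stand in for the thesis's pruning rule.

```python
def prune_nodes(V, W, tol=1e-2):
    # Drop hidden nodes whose incoming weight group has been driven (near)
    # zero by the penalty; tol is an assumed pruning threshold.
    keep = np.linalg.norm(V, axis=0) > tol
    return V[:, keep].copy(), W[keep, :].copy()

def relearn(X, y, V, W, epochs=200, eta=0.1, reinitialize=False, seed=0):
    # Relearning after pruning: either continue from the surviving weights
    # ("sparse structure" relearning) or re-initialize the sparse topology.
    V, W = prune_nodes(V, W)
    if reinitialize:
        rng = np.random.default_rng(seed)
        V = rng.normal(scale=0.1, size=V.shape)
        W = rng.normal(scale=0.1, size=W.shape)
    n = X.shape[0]
    for _ in range(epochs):                   # plain batch gradient descent, no penalty
        H = sigmoid(X @ V)
        p = sigmoid(H @ W).ravel()
        delta_out = (p - y)[:, None] / n
        delta_hid = (delta_out @ W.T) * H * (1.0 - H)
        W -= eta * (H.T @ delta_out)
        V -= eta * (X.T @ delta_hid)
    return V, W
```

Running `relearn(X, y, V, W, reinitialize=False)` and `relearn(X, y, V, W, reinitialize=True)` on the same pruned network and comparing their accuracies mirrors, in simplified form, the structure-versus-initialization comparison of the final experiments.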
Keywords/Search Tags: feedforward neural networks, smooth group L1/2, sparse smooth group L1/2, adaptive learning rate, relearning