
Study On Hidden Nodes Number Of Neural Networks

Posted on: 2020-12-16    Degree: Master    Type: Thesis
Country: China    Candidate: G G Yin    Full Text: PDF
GTID: 2428330590996833    Subject: Computational Mathematics
Abstract/Summary:
Artificial neural networks (ANNs) have been a research hotspot in recent years. They are widely used and valued in many fields, such as pattern recognition, speech recognition, face recognition, and natural language processing, and have developed very rapidly. The choice of the number of hidden layer nodes is important because it affects network performance and, in turn, practical applications; it has therefore been a focus for many scholars. This paper studies the number of hidden layer nodes in two kinds of feedforward neural networks: the Back Propagation (BP) neural network, one of the most commonly used feedforward networks, and the Extreme Learning Machine (ELM), proposed by Huang et al. in 2004. To date, there has been no complete comparative study of the number of hidden layer nodes in these two single-hidden-layer feedforward networks.

First, this paper studies the number of hidden layer nodes required by ELM and the BP neural network to reach a local optimum in accuracy, and examines the effect of the number of hidden nodes on the performance of the two algorithms. Numerical experiments demonstrate that ELM needs more hidden layer nodes than the BP network to reach a local optimum in test accuracy, but takes less training time. As the number of hidden layer nodes increases, the test accuracies of the two networks follow the same trend.

The choice of the number of hidden layer nodes is particularly important for ELM. Since the number of hidden nodes affects network performance, more hidden nodes are often chosen to guarantee performance; however, the more hidden nodes an ELM has, the more complex its structure becomes. To simplify the network structure, we study an improved ELM algorithm with an L1/2 regularization term: the L1 norm of the weights between the hidden layer and the output layer is introduced into the original algorithm, an improved L1/2 regularization method is developed, and a variable threshold is used to prune hidden layer nodes. Numerical experiments demonstrate that, with network performance preserved, the improved algorithm yields fewer hidden nodes and a sparser structure than the ELM algorithm with only the L1/2 regularization term.
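The node-count comparison and pruning idea above rest on how ELM trains: hidden-layer weights are drawn at random and fixed, and only the output weights are solved by least squares, so a hidden node whose output weight is near zero contributes little and can be cut. A minimal sketch of this (the toy data, network size, threshold value, and the simple hard-threshold rule are illustrative assumptions, not the thesis's exact variable-threshold L1/2-regularized scheme):

```python
import numpy as np

def train_elm(X, y, n_hidden, rng):
    """Train a basic ELM: random fixed hidden weights, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never updated)
    b = rng.standard_normal(n_hidden)                # random hidden biases (never updated)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights by least squares
    return W, b, beta

def prune_hidden_nodes(W, b, beta, tau):
    """Drop hidden nodes whose output-weight magnitude falls below threshold tau.
    A simplified stand-in for the thesis's variable-threshold pruning."""
    keep = np.abs(beta) >= tau
    return W[:, keep], b[keep], beta[keep]

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(np.pi * X[:, 0])                          # toy 1-D regression target

W, b, beta = train_elm(X, y, n_hidden=50, rng=rng)
W2, b2, beta2 = prune_hidden_nodes(W, b, beta, tau=0.05)
print(len(beta2), "of", len(beta), "hidden nodes kept after pruning")
```

Because training is a single linear solve rather than iterative backpropagation, ELM is fast even with many hidden nodes, which is consistent with the finding above that ELM needs more nodes than BP yet trains in less time.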
Keywords/Search Tags: Back Propagation (BP) Neural Network, Extreme Learning Machine (ELM), Hidden Layer Nodes, L1/2 Regularization Term, L1 Norm