
Research Of Neural-Network Weight Normalization

Posted on: 2018-06-23 | Degree: Master | Type: Thesis
Country: China | Candidate: S Du | Full Text: PDF
GTID: 2348330515478253 | Subject: Computer application technology
Abstract/Summary:
Artificial neural networks, commonly just called neural networks, are models in which many interconnected information processing units simulate the way biological cells process information. The model originated in 1943 with Warren McCulloch and Walter Pitts, but at the time it attracted little attention in academia. In 1969, Minsky and Papert published a book describing the dilemma of 1960s neural network research, observing that few researchers were then working on neural networks and that results were scarce. Today, neural networks and their evolution into deep learning are undoubtedly among the hottest topics of the moment, with tens of thousands of researchers and engineers engaged in the field. The return of neural networks to public attention owes to their outstanding performance in image recognition, artificial intelligence, and commercial recommendation.

Like other machine learning algorithms, neural networks also suffer from overfitting during training. Overfitting arises mainly because the training samples are few, so the network fails to learn the full distribution of the data. Seen from another angle, it indicates that the network model is so complex that it fits the noise in the data set rather than learning only from the correct data. Many scholars have proposed techniques to prevent network overfitting, the most widely applied of which are regularization techniques. Regularizing the weights of a neural network reduces overfitting by adding a penalty term on the weights, which prevents overly complex networks from arising under limited training data and requires no manual intervention. Finding a suitable and effective regularization method is therefore important for the generalization ability of the network.

There are two main types of neural network regularization. One is the family of weight-decay penalty methods: the L2 penalty finds the steepest descent direction measured in Euclidean space, while the L1 penalty finds the steepest descent direction measured under the L1 norm. Furthermore, when the objective function is convex, the convergence rate of the network depends on the regularization method: under the L2 penalty the weights converge at a quadratic rate, whereas an L1-constrained network converges linearly. The other widely applied method is dropout. Rather than adding an explicit weight penalty term, dropout takes a more direct approach and randomly hides some of the weighted connections, thereby suppressing overfitting.

Whether one uses the now widespread L1 or L2 method, it is clear from their definitions that both constrain the network weights globally. The L2 penalty is the square root of the sum of the squared weight components, so when the L2 norm is constrained to be small, every weight becomes small. L1 behaves similarly, the difference being that an L1 constraint drives part of the weights exactly to zero. An effective localized method based on the weight propagation path would therefore be an important advance for neural network research.
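To make the two families concrete, the following is a minimal NumPy sketch, not code from the thesis; the function names and the inverted-dropout formulation are illustrative assumptions.

    # Minimal sketch of the global penalty terms and dropout discussed
    # above. Illustrative only; not the thesis implementation.
    import numpy as np

    def l2_penalty(w, lam):
        # L2 penalty: lam times the sum of squared weights. Constraining
        # it keeps every weight small, but rarely exactly zero.
        return lam * np.sum(w ** 2)

    def l1_penalty(w, lam):
        # L1 penalty: lam times the sum of absolute weights. It drives a
        # subset of the weights exactly to zero, yielding sparse networks.
        return lam * np.sum(np.abs(w))

    def dropout(a, p, rng):
        # Inverted dropout: zero each activation with probability p during
        # training and rescale the survivors so the expected value matches.
        mask = rng.random(a.shape) >= p
        return a * mask / (1.0 - p)

The gradient contributions mirror the behavior described above: the L2 term adds 2*lam*w to the gradient, shrinking each weight in proportion to its size, while the L1 term adds lam*sign(w), a constant-magnitude pull toward zero that produces exact zeros.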
In this paper, we discuss various factors that affect the performance of a neural network, including the weight initialization method, the form of the loss function, and the type of activation function. Based on the characteristics of unbalanced networks, we propose a local regularization method based on neuron connections: the regularization value of a neuron depends on the forward nodes connected to it and the values of the nodes in the connected layer. Exploiting the scale-invariance of linear neurons, we add a localized weight penalty on top of L2 regularization. We report experimental results on the MNIST data set comparing the proposed approach with the L1 method, the L2 method, the dropout method, and training without regularization. The results show that the localized weight regularization algorithm based on propagation paths proposed in this paper helps prevent network overfitting and improves classification accuracy.
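As a rough illustration of what a connection-local penalty can look like, here is a hypothetical sketch in the spirit of the method described above. The thesis's exact formulation is not reproduced here; grouping each neuron's incoming weights and combining the result with a global L2 term are assumptions made for illustration.

    # Hypothetical sketch of a localized penalty. Each neuron's incoming
    # weights are penalized as a group, so the constraint acts along the
    # connections feeding that neuron rather than on the global weight
    # vector. This grouping is an assumption, not the thesis formulation.
    import numpy as np

    def local_penalty(W, lam):
        # W: (n_in, n_out) weight matrix of one layer; column j holds the
        # incoming weights of neuron j.
        per_neuron_norm = np.sqrt(np.sum(W ** 2, axis=0))  # one norm per neuron
        return lam * np.sum(per_neuron_norm)

    def total_penalty(layer_weights, lam_global, lam_local):
        # Combine a standard global L2 term with the localized term, since
        # the paper adds its local penalty on top of L2 regularization.
        global_term = sum(np.sum(W ** 2) for W in layer_weights)
        local_term = sum(local_penalty(W, 1.0) for W in layer_weights)
        return lam_global * global_term + lam_local * local_term

Such a group-wise form acts like an L1 penalty across neurons, so entire incoming weight groups can be driven toward zero, while acting like L2 within each group, which matches the intuition of constraining weights along individual propagation paths.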
Keywords/Search Tags: Neural network, weight normalization, overfitting