
An Improved BP Algorithm and Its Application in Data Warehouses

Posted on: 2012-03-04
Degree: Master
Type: Thesis
Country: China
Candidate: Y Lv
Full Text: PDF
GTID: 2178330332499666
Subject: Software engineering

Abstract/Summary:
The purpose of this paper is to introduce a new improved algorithm based on an analysis of the structure, principle, and limitations of the BP algorithm. The standard BP algorithm learns slowly, and the error surface of the network contains flat regions in which the weight change is almost zero, which can bring the training of the whole network to a halt. The new algorithm converges significantly faster, and in fewer training epochs, than the original algorithm. The paper improves the BP algorithm in three directions: adding a new momentum factor and a scaling factor to the standard weight-update formula, while replacing the constant learning rate with one that is adjusted gradually during training; using a gain variation term that modifies the local gradient to give an improved gradient search direction at each training iteration; and applying the Levenberg-Marquardt algorithm, which is based on the quasi-Newton method.

In the back-propagation algorithm, the learning rate directly affects the convergence rate of the entire network, because the learning rate and the gradient are the direct factors in the change of the connection weights. In the standard BP algorithm, however, the learning rate is initialized to a constant before training, and gradient descent with a constant learning rate is inefficient. This paper therefore proposes an efficient way to adjust the learning rate during training, in which the learning rate at iteration k depends on the learning rate at iteration k-1. In addition, a new scaling factor is added to the original weight-update formula, which already contains two factors (a momentum factor and a learning-rate factor); the scaling factor adjusts for the difference between the output and the target at each iteration, makes the training of the network more accurate, and improves on the limitations of the standard algorithm (a code sketch of this combined update rule is given below).

It has recently been shown that a BP algorithm using a gain variation term in the activation function converges faster than the standard BP algorithm. What had not been noticed is that the gain variation term can modify the local gradient to give an improved gradient search direction at each training iteration. This fact allows us to develop and investigate several important conjugate-gradient (CG) formulas in order to improve the rate of convergence of the proposed type of algorithm. This study suggests that a simple modification to the search direction can also substantially improve the training efficiency of almost all major (well-known and well-developed) optimization methods. The conjugate gradient method achieves global convergence under exact or inexact line search, but its search direction has a local property rather than a global one, so any method that makes use of the gradient vector can be expected to reach only a minimum point. This paper therefore proposes an improved non-conjugate gradient method based on a non-quadratic model. The so-called non-quadratic model is a quadratic function of the vector x, called here a quasi-sigmoid function. On this basis, the method modifies the network search direction, calculates the optimal learning rate, updates the connection weights, and calculates the new gradient vector, repeating until network training ends.
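As a concrete illustration of the first improvement direction (the modified weight-update rule), the sketch below combines a momentum term, a scaling factor on the gradient step, and a learning rate adapted from its value at the previous iteration. This summary does not give the exact formulas, so the names (alpha for the momentum factor, beta for the scaling factor) and the adaptation rule (grow the rate while the error falls, shrink it otherwise) are assumptions for illustration only, not the thesis's definitions.

```python
def adaptive_bp_update(w, grad, prev_dw, eta, err, prev_err,
                       alpha=0.9, beta=1.0,
                       eta_up=1.05, eta_down=0.7):
    """One BP weight update with momentum, a scaling factor, and an
    adaptive learning rate. A minimal sketch, not the thesis's exact rule.

    w        : current weight vector (float or NumPy array)
    grad     : gradient of the error w.r.t. w
    prev_dw  : weight change from the previous iteration (momentum input)
    eta      : learning rate carried over from iteration k-1
    err      : current network error; prev_err: error at iteration k-1
    alpha    : momentum factor; beta: scaling factor (assumed names)
    """
    # Learning rate at step k depends on the rate at step k-1:
    # grow it while the error keeps falling, shrink it otherwise.
    eta = eta * eta_up if err < prev_err else eta * eta_down

    # Scaled gradient-descent step plus momentum term.
    dw = -eta * beta * grad + alpha * prev_dw
    return w + dw, dw, eta
```

The returned `dw` and `eta` are fed back in at the next iteration, so the update at step k is explicitly coupled to step k-1, as the abstract describes.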
Although Newton's method is faster and more effective than the standard neural-network training algorithm, computing the Hessian matrix is complex and costly, so under normal circumstances Newton's method is not suitable as a neural-network training algorithm. The Levenberg-Marquardt (LM) method evolved from the gradient descent method and Newton's method. It is an iterative solution that selects the step at each iteration along the direction opposite to the gradient; in its standard form the weight update is Δw = -(JᵀJ + μI)⁻¹Jᵀe, where J is the Jacobian of the error vector e and μ is a damping parameter. The damping parameter of the LM method helps to increase stability and convergence.

In this paper, to further verify the new algorithm, the Matlab neural network toolbox is used to simulate both the standard algorithm and the new algorithm. They are verified on specific problems in the data warehouse setting, such as the typical logical XOR problem, sketched below (this classification cannot be solved by a linear model, while a BP neural network has strong nonlinear mapping ability and can simulate such problems), and a function approximation problem (the main applications of BP neural networks are function mapping, pattern recognition, and function approximation, and a three-layer BP network can realize any nonlinear mapping from n dimensions to m dimensions).

Experimental comparison shows that the new algorithm proposed in this paper improves on the standard algorithm and on some of the problems that exist in the original algorithm; its convergence rate increases significantly, achieving the desired results.
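The thesis runs its experiments in the Matlab neural network toolbox; the snippet below is only a language-neutral illustration of the XOR benchmark described above, written as a minimal three-layer BP network in NumPy with plain gradient descent. The architecture (2-4-1), sigmoid activations, learning rate, and epoch count are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: not linearly separable, the classic BP benchmark.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

eta = 0.5  # constant learning rate (standard BP, for comparison)
for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= eta * h.T @ d_out; b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_h;   b1 -= eta * d_h.sum(axis=0)

print(out.round(3))  # approaches [0, 1, 1, 0] after training
```

Replacing the constant `eta` and plain gradient step here with the adaptive update sketched earlier is exactly the kind of comparison the thesis reports between the standard and improved algorithms.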
Keywords/Search Tags: Artificial Neural Network, BP Neural Network, Scale Factor, Improved Conjugate Gradient Method, Newton Method, Non-Quadratic Model, Levenberg-Marquardt Algorithm