
Research On BP Neural Network Learning Based On Particle Swarm Optimization And Simulated Annealing Algorithm

Posted on: 2014-02-09    Degree: Master    Type: Thesis
Country: China    Candidate: L Jiang    Full Text: PDF
GTID: 2248330398979902    Subject: Computer application technology
Abstract/Summary:
With the rapid development of intelligent evolutionary algorithms and computer science, neural network learning methods are being applied ever more widely in real applications. As one of the most important feed-forward models, the BP (back-propagation) neural network is a multilayer feed-forward network trained by propagating errors backwards. It can process large-scale nonlinear data in parallel and offers fault tolerance in practice, and its basic learning algorithm is efficient, useful, and simple to implement. Despite these worthwhile features, the BP network has some inherent limitations: slow convergence, a tendency to become trapped in local optima, and weak generalization ability. Many improvements have been proposed to address these limitations; one popular approach is to use intelligent evolutionary algorithms, such as particle swarm optimization (PSO) and the simulated annealing (SA) algorithm, to optimize the structure of the BP neural network. However, these evolutionary BP learning methods generally replace the BP algorithm with the evolutionary algorithm and apply it directly in network training, and thus ignore the error back-propagation mechanism that plays an important role in BP network training.

The main contribution of this thesis is to further improve the BP network model and its training algorithms by extending the existing evolutionary BP learning methods. Specifically, it proposes several new collaborative and hybrid algorithms for the BP network that reintroduce the error back-propagation mechanism into the traditional evolutionary BP learning algorithms. These collaborative and hybrid algorithms strengthen the optimization ability of the individual methods and enhance the learning and generalization ability of the BP network. The thesis mainly covers a collaborative training method for the BP network parameters, a hybrid network learning method based on particle swarm optimization and BP, and a hybrid network learning algorithm based on particle swarm optimization and simulated annealing.

The main contributions and novelties of this thesis are as follows:

(1) Because the standard BP algorithm suffers from slow convergence and is easily trapped in local optima during training, and traditional PSO-based BP training algorithms generally ignore the error back-propagation mechanism, a new collaborative BP neural network learning algorithm is proposed that combines error back-propagation with particle swarm optimization. Its main idea is to update the network parameters (weights and thresholds) by applying the PSO and BP update rules simultaneously, thereby integrating the strengths of both methods in the training process; a minimal sketch of this collaborative update is given below. The proposed algorithm is evaluated on simulation tests with five typical complex functions.
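For illustration, the following is a minimal sketch (not the thesis implementation) of such a collaborative PSO + BP update: each particle encodes the flattened weights and thresholds of a small one-hidden-layer network, and its position is moved by the usual PSO velocity plus a back-propagation gradient step. The toy data, function names, and hyper-parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x)
X = np.linspace(-3, 3, 60).reshape(-1, 1)
Y = np.sin(X)

H = 8                            # hidden units
DIM = 1 * H + H + H * 1 + 1      # W1, b1, W2, b2 flattened

def unpack(theta):
    """Split a flat parameter vector into network weights and thresholds."""
    i = 0
    W1 = theta[i:i + H].reshape(1, H); i += H
    b1 = theta[i:i + H];               i += H
    W2 = theta[i:i + H].reshape(H, 1); i += H
    b2 = theta[i:i + 1]
    return W1, b1, W2, b2

def mse_and_grad(theta):
    """Forward pass, mean-squared error, and its gradient (back-propagation)."""
    W1, b1, W2, b2 = unpack(theta)
    A = np.tanh(X @ W1 + b1)             # hidden activations
    P = A @ W2 + b2                      # network output
    E = P - Y
    loss = np.mean(E ** 2)
    # Back-propagate the error through the two layers.
    dP = 2 * E / len(X)
    gW2 = A.T @ dP
    gb2 = dP.sum(0)
    dZ = (dP @ W2.T) * (1 - A ** 2)
    gW1 = X.T @ dZ
    gb1 = dZ.sum(0)
    return loss, np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])

# PSO hyper-parameters (assumed values, not from the thesis)
N_PART, ITERS = 20, 200
W_INERTIA, C1, C2, LR = 0.7, 1.5, 1.5, 0.05

pos = rng.normal(scale=0.5, size=(N_PART, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse_and_grad(p)[0] for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    for i in range(N_PART):
        loss, grad = mse_and_grad(pos[i])
        r1, r2 = rng.random(DIM), rng.random(DIM)
        # Collaborative update: PSO velocity term plus BP gradient-descent term.
        vel[i] = (W_INERTIA * vel[i]
                  + C1 * r1 * (pbest[i] - pos[i])
                  + C2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i] - LR * grad
        new_loss = mse_and_grad(pos[i])[0]
        if new_loss < pbest_f[i]:
            pbest_f[i], pbest[i] = new_loss, pos[i].copy()
    gbest = pbest[pbest_f.argmin()].copy()

print("best training MSE:", pbest_f.min())
```

The key step is the position update, which applies the PSO movement and the gradient-descent (error back-propagation) correction in the same iteration, rather than replacing one with the other.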
Simulation results on these complex functions show that the proposed algorithm improves the learning and generalization ability of the BP network, converges faster with higher precision, and performs better than two other optimized BP network models.

(2) Because traditional PSO-based BP learning algorithms usually become trapped in local optima, a novel hybrid network training algorithm is proposed. In the traditional PSO process, the particles tend to "collect" around the personal and global best positions, which causes the "premature" phenomenon and trapping in local optima; when this happens, the particles are distributed between the personal and group global extremes. This motivates a hybrid parameter training algorithm for the BP network that further adjusts and perturbs the particle personal and group global extremes. The proposed hybrid algorithm helps prevent particle "collection" and "premature" convergence, and thus improves the ability to escape local optima while keeping a fast convergence rate. Simulation results on several complex functions show that the proposed hybrid algorithm performs better than the BP algorithm and the traditional PSO-based BP learning method.

(3) Because the standard BP algorithm tends to become trapped in local optima when searching for an optimal solution, a new hybrid optimization method based on particle swarm optimization and BP is proposed for BP network training and learning. The method first updates the weights and thresholds of the BP network with the PSO and BP update rules, respectively, and then uses an optimal-choice strategy to merge the two update processes. This hybrid method helps prevent the "collection" and "premature" phenomena in PSO and thus gains the ability to escape local minima. The algorithm is applied to simulation experiments on several complex functions and compared with the standard BP network and the traditional PSO-based BP training algorithm. Experimental results show that the proposed algorithm performs better than the two traditional BP network optimization algorithms, indicating the benefit of the hybrid approach.

(4) Building on the collaborative parameter update algorithm proposed in this thesis, a new BP network optimization algorithm is proposed that adopts the SA optimization framework and its probabilistic random search strategy with global search ability. The algorithm first updates the network weights by integrating particle swarm optimization and BP simultaneously, yielding a collaborative update algorithm based on PSO and BP, and then improves the optimization process with the SA algorithm to obtain a new network learning algorithm. Each newly generated solution is judged superior or inferior according to a pre-defined fitness value; superior solutions are accepted with a large pre-defined probability and inferior solutions with a small one. This retains the diversity of the particles in the PSO optimization process and further improves the global search ability of the algorithm; a minimal sketch of such an acceptance rule is given at the end of this abstract. The algorithm is applied to simulation experiments on four typical complex functions and compared with two other models, the standard BP network and the traditional PSO-based BP algorithm.
Experimental results show that the proposed algorithm performs better than the other two BP network optimization algorithms.
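As an illustration of the annealing-style acceptance in contribution (4), the following sketch uses a common Metropolis-style rule: an improving solution is always accepted, while a worse one is accepted with a temperature-dependent probability. This is one standard realization of the idea rather than the thesis's exact rule (which accepts superior and inferior solutions with large and small pre-defined probabilities); the function names, cooling schedule, and toy objective are assumptions.

```python
import math
import random

def sa_accept(current_f, candidate_f, temperature):
    """Metropolis-style acceptance: better solutions always,
    worse ones with probability exp(-delta / T)."""
    delta = candidate_f - current_f           # minimisation: positive = worse
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / max(temperature, 1e-12))

def anneal(fitness, x0, neighbour, T0=1.0, cooling=0.95, steps=500):
    """Generic SA loop; in the hybrid scheme the candidate would be the
    particle produced by the collaborative PSO/BP update rather than a
    random neighbour."""
    x, fx = x0, fitness(x0)
    best, best_f = x, fx
    T = T0
    for _ in range(steps):
        cand = neighbour(x)
        f_cand = fitness(cand)
        if sa_accept(fx, f_cand, T):
            x, fx = cand, f_cand
            if fx < best_f:
                best, best_f = x, fx
        T *= cooling                          # geometric cooling schedule
    return best, best_f

# Toy usage: minimise a 1-D multimodal function.
f = lambda x: x * x + 10 * math.sin(3 * x)
nb = lambda x: x + random.uniform(-0.5, 0.5)
print(anneal(f, 5.0, nb))
```

Accepting occasional inferior solutions is what preserves population diversity and gives the hybrid algorithm its improved global search ability.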
Keywords/Search Tags: BP algorithm, Error back propagation, Gradient descent, Particle swarm optimization, Simulated annealing