
Research On Evolutionary Algorithms For High Dimensional Optimization And Their Applications

Posted on: 2010-08-29    Degree: Doctor    Type: Dissertation
Country: China    Candidate: C X Xiao    Full Text: PDF
GTID: 1118360278957270    Subject: Computer application technology
Abstract/Summary:
Evolutionary algorithms (EAs) are evolutionary computation techniques for search and optimization inspired by biological evolution. Their main characteristics are a population-based search strategy and the exchange of information between individuals in the population, and the search process does not rely on gradient information. EAs mainly include genetic algorithms, evolution strategies and evolutionary programming, and are also collectively called generalized genetic algorithms. EAs are simple in concept, have few parameters and are easy to implement. They have proved to be an efficient way of solving optimization problems and have been applied successfully to function optimization, neural network training, fuzzy control systems and other areas. However, both the theory and the application of EAs are still far from mature.

This dissertation concentrates on several high-dimensional optimization problems drawn from the domain of multiple mobile robot systems. It studies the principles, theory and application of EAs and, in particular, gives an in-depth and systematic account of how to improve conventional EAs for problems such as high-dimensional function optimization, constrained optimization, evolving neural networks, large-scale combinatorial optimization and soft fault diagnosis in multi-mobile-robot control systems. The main contributions of the dissertation are as follows.

A new approach is proposed for high-dimensional constrained optimization and high-dimensional global numerical optimization over real domains, using an evolutionary algorithm based on the good point set. It focuses on effective constraint-handling techniques and efficient evolutionary operators. Drawing on successful experience with the orthogonal method, differential evolution and particle swarm optimization, the proposed method does not depend on the dimensionality of the search space: its genetic operators are constructed according to the good point set principle, whose precision is not limited by the dimension of the search space, which makes it better suited to high-dimensional optimization than the orthogonal approach. Results on benchmark functions show that the new approach is general, effective and robust.

An evolution strategy based on the good point set and differential evolution is applied to tune both the structure and the parameters of a feedforward neural network. Adopting Leung's coding scheme, the method retains both the global search ability of differential evolution and the good local search ability of the good point set, so local optima are effectively avoided, especially when the number of parameters is large. The numbers of hidden nodes and links of the feedforward network are chosen by increasing them from small values until the learning performance is good enough; as a result, a partially connected feedforward network is obtained after tuning, which implies that the cost of implementing the network, in terms of hardware and processing time, can be reduced.
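For concreteness, a minimal sketch of the two ingredients named above follows: a good-point-set population initializer (assuming the common cyclotomic-field construction with p the smallest prime not less than 2*dim + 3) and the classic DE/rand/1/bin trial-vector step. Function names, parameter defaults and the mapping onto box constraints are illustrative choices, not the dissertation's actual operators.

    import numpy as np

    def smallest_prime_geq(n):
        """Return the smallest prime p with p >= n (simple trial division)."""
        def is_prime(m):
            if m < 2:
                return False
            return all(m % d for d in range(2, int(m ** 0.5) + 1))
        p = n
        while not is_prime(p):
            p += 1
        return p

    def good_point_set(n, dim):
        """n points of a good point set in the unit cube [0, 1]^dim.

        Uses the cyclotomic-field construction gamma_i = 2*cos(2*pi*i/p),
        with p the smallest prime >= 2*dim + 3; the k-th point is the
        fractional part of k*gamma.  The points cover the unit cube more
        uniformly than random sampling, regardless of dim.
        """
        p = smallest_prime_geq(2 * dim + 3)
        gamma = 2.0 * np.cos(2.0 * np.pi * np.arange(1, dim + 1) / p)
        k = np.arange(1, n + 1).reshape(-1, 1)
        return np.mod(k * gamma, 1.0)          # shape (n, dim), values in [0, 1)

    def init_population(n, lower, upper):
        """Map a good point set onto the box constraints [lower, upper]."""
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        return lower + good_point_set(n, lower.size) * (upper - lower)

    def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
        """One DE/rand/1/bin trial vector for individual i (classic scheme)."""
        rng = np.random.default_rng() if rng is None else rng
        n, dim = pop.shape
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True        # guarantee at least one mutant gene
        return np.where(cross, mutant, pop[i])

In this sketch the population is seeded deterministically by the good point set and then refined by differential evolution; a full algorithm would add the constraint-handling and selection steps described above.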
A novel encoding approach is proposed that constructs a mapping from a discrete space to a continuous interval for the large-scale combinatorial optimization problems drawn from task allocation and path planning of robots. Built on the new encoding scheme and combined with the successful mechanisms of evolutionary algorithms, the performance of the proposed algorithm is greatly improved. There is a one-to-one mapping between the new codes and the combinatorial vectors, and the encoding scheme always generates feasible solutions, which helps the algorithm avoid the redundant computation present in some existing algorithms. Based on the new encoding scheme and a theoretical proof, the search space is further reduced. In addition, an elite queue combined with a particular learning strategy is added to the evolutionary mechanism; the queue is refreshed frequently during the evolutionary process, which helps the algorithm retain relatively good gene blocks. Finally, convergence to the global optimal solution with probability one is proved, and numerical experiments show the effectiveness of the algorithm.

A new negative selection algorithm based on the (μ+λ)-ES is proposed to solve soft fault diagnosis problems in multi-mobile-robot control systems. By combining the Hamming and r-continuous matching rules with a signal-to-noise ratio method, the detectors of the whole system are distributed more evenly. Compared with the traditional negative selection algorithm, the new algorithm no longer searches blindly for mature detectors, thanks to the evolution strategy; in particular, for large self sets it can generate mature detectors rapidly and accurately. The results show that both the number of iterations needed to generate mature detectors and the number of holes decrease quickly, while the anomaly detection rate rises.
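To make the matching machinery concrete, the toy sketch below implements r-contiguous-bit matching and the censoring step of classic negative selection over binary strings. The (μ+λ)-ES detector search and the signal-to-noise weighting described above are not reproduced; all names, parameters and data here are hypothetical.

    import numpy as np

    def r_contiguous_match(a, b, r):
        """True if binary strings a and b agree on at least r contiguous positions."""
        run = 0
        for x, y in zip(a, b):
            run = run + 1 if x == y else 0
            if run >= r:
                return True
        return False

    def censor_detectors(candidates, self_set, r):
        """Classic negative selection: keep candidates that match no self string."""
        return [d for d in candidates
                if not any(r_contiguous_match(d, s, r) for s in self_set)]

    def detect(sample, detectors, r):
        """Flag a sample as anomalous if any mature detector matches it."""
        return any(r_contiguous_match(sample, d, r) for d in detectors)

    # Tiny usage example with random binary strings (toy data, not from the thesis).
    rng = np.random.default_rng(0)
    L, r = 16, 6
    self_set   = [tuple(rng.integers(0, 2, L)) for _ in range(20)]
    candidates = [tuple(rng.integers(0, 2, L)) for _ in range(200)]
    detectors  = censor_detectors(candidates, self_set, r)
    print(len(detectors), detect(tuple(rng.integers(0, 2, L)), detectors, r))

The censoring step here simply discards random candidates that match self, which is exactly the blind search the dissertation replaces with an evolution strategy over the candidate detectors.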
Keywords/Search Tags: good point set, evolutionary algorithm, evolving neural network, large-scale combinatorial optimization, negative selection, soft fault diagnosis