
Study On Performance Of Intelligent Optimization Algorithms And Search Space

Posted on: 2008-10-07    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y C Gao    Full Text: PDF
GTID: 1118360212494395    Subject: Control theory and control engineering
Abstract/Summary:
Optimization has received widespread attention as an important scientific branch. It has greatly influenced many disciplines, gained rapid promotion and application in numerous engineering domains, and become an indispensable tool in many different areas. Optimization algorithms obtain solutions that satisfy users' requirements by a certain approach, based on some underlying idea or mechanism. Optimization theory has developed rapidly for problems such as linear programming, nonlinear and stochastic programming, geometric programming, and integer programming, and the resulting approaches have found increasingly widespread practical application. Driven by advances in computer technology, optimization theory and methods have been widely applied in economic planning, engineering design, production management, transportation, and other areas, making optimization an extremely active discipline.

Many optimization problems, such as the traveling salesman problem, equipment layout, and production scheduling, have been proved to be NP-complete. So far no effective polynomial-time method is known for NP-complete problems, and the computing time required by a traditional optimization method grows exponentially with the scale of the problem. Researchers have therefore developed many random search algorithms, such as Monte Carlo simulation, Simulated Annealing, Genetic Algorithms, Particle Swarm Optimization, Tabu Search, and hybrid optimization strategies, hoping to obtain optimal or near-optimal solutions in limited time.

As a new class of random search algorithms, intelligent optimization algorithms generally refer to those developed from the similarity between natural biological systems and the optimization process. They generate the next generation of solutions by operating on a set of solutions in the search space according to certain statistical rules. The search mechanism of an algorithm therefore determines its global and local search ability and, in turn, its search performance, that is, whether the algorithm can converge to global optima quickly. At the same time, because of variable assignment and complicated constraints, feasible and infeasible solutions are often interleaved across the search space, which results in an irregular spatial structure. An intelligent algorithm may therefore spend most of its time on invalid search over infeasible solutions, which greatly reduces its search speed and can prevent it from converging to a global optimum. This dissertation therefore focuses on the search capability of intelligent algorithms, aiming to improve their convergence speed and solution quality and to keep them from being trapped in local optima. We study methods to enhance the global search ability of an algorithm from the perspectives of its search mechanism and the spatial structure of the search space, while also taking its local search ability into account.

Previous intelligent optimization algorithms reproduce good solutions and retain individuals according to their fitness during the search, which leads to the loss of effective components and a reduction of population diversity. The algorithms thus lose the ability to explore new regions of the space and are easily trapped in local optima, while in later iterations weak local search ability makes convergence excessively slow.
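The abstract describes this population-based generation mechanism only in general terms. As a purely illustrative sketch (not taken from the dissertation), the loop below produces offspring with a mutation operator and keeps the fittest individuals; the function names, population size, and the quadratic test objective are assumptions. The truncation selection it uses is exactly the kind of pressure that, as noted above, erodes population diversity.

```python
import random

def evolve(objective, init_population, mutate, n_iters=200):
    """Minimal population-based random search loop (illustrative sketch).

    objective: function to minimize; mutate: operator returning a perturbed
    copy of a solution. Both are hypothetical placeholders.
    """
    population = list(init_population)
    for _ in range(n_iters):
        # Operate on the current set of solutions to produce offspring.
        offspring = [mutate(sol) for sol in population]
        # Truncation selection: keep the best individuals. This selection
        # pressure is what gradually reduces population diversity.
        pool = population + offspring
        pool.sort(key=objective)
        population = pool[:len(population)]
    return population[0]

# Toy usage: minimize a simple quadratic over real-valued vectors.
dim = 5
best = evolve(
    objective=lambda x: sum(v * v for v in x),
    init_population=[[random.uniform(-5, 5) for _ in range(dim)] for _ in range(20)],
    mutate=lambda x: [v + random.gauss(0, 0.3) for v in x],
)
print(best)
```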
Only by balancing the space exploration and exploitation capacities of an algorithm, that is, its global and local search abilities, and by controlling the search over the space effectively during execution, can good performance be obtained and convergence speed and solution quality be improved.

1. Regarding the problem that diversity reduction leads to loss of the ability to explore the space, accepting inferior solutions in a reasonable manner during optimization can increase population diversity, help the algorithms avoid being trapped in local optima, and improve their performance. Mutation is the foundation for producing population diversity, while crossover (including information exchange or transfer) acts as an accelerator. We therefore use Evolutionary Programming (EP), which relies on mutation only, and introduce an annealing-probability selection mechanism into it: in each iteration, inferior solutions are accepted with an annealing probability (see the illustrative sketch below). It is proved that the new algorithm converges globally, avoids being trapped in local optima effectively, and reaches the optimal solution faster. By estimating the rate at which the population comes to contain the optimal solution with probability 1 and the longest time needed for convergence, we find that the annealing factor affects the convergence speed, which can be improved by choosing a suitable annealing factor.

2. Creating new solutions based on the information available during the search can extend the search scope at some cost in time. Here we use solution backbones to create new solutions in each iteration, so as to improve global search ability and solution quality.

(1) First, we change the fitness definition of variables used in extremal optimization to make it consistent with the objective function. Simulations show that the new definition enhances the search ability of individual solutions, and we discuss the influence of parameter settings. Then, combining this with the concept of solution backbones in combinatorial optimization problems, we design a backbone-based extremal evolutionary algorithm; the use of backbone information strengthens its local search ability. Finally, we give a convergence proof for this algorithm.

(2) With fitness defined by the objective function, we analyze the role of the new solutions produced in the iterative process. On the basis of individual performance comparison and the structure of topology-evolving network models, we design Network Topology Evolving Optimization (NTEO), which has a variable population size. Comparisons show that this new algorithm extends the search scope noticeably, and for precisely this reason its convergence is slow; however, as computer technology develops rapidly, the time expense will not limit its applications. Moreover, NTEO does not fix the mutation mechanism of solutions, so it can be combined with various other mutation techniques. By encoding continuous variables in binary and denoting the common parts of solutions with binary bits of equal value, the algorithm can also be applied to continuous problems.

The large search space of an optimization problem, together with the contradiction between search randomness and search speed, also greatly affects the performance of an algorithm. Most intelligent algorithms carry out global search over the entire space, and when the problem scale becomes large, the search space inflates rapidly.
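The dissertation does not include code; the following sketch is one plausible reading of the annealing-probability acceptance rule described in point 1 above, applied to a mutation-only EP-style loop for a single individual. The parameter names (t0 for the initial temperature, alpha for the annealing factor) and the geometric cooling schedule are assumptions, not the dissertation's specification.

```python
import math
import random

def ep_annealing_accept(objective, init, mutate, t0=1.0, alpha=0.95, n_iters=500):
    """Mutation-only evolutionary search that accepts inferior solutions
    with an annealing probability (illustrative sketch)."""
    current = init
    current_cost = objective(current)
    temperature = t0
    for _ in range(n_iters):
        candidate = mutate(current)
        delta = objective(candidate) - current_cost
        # Improvements are always accepted; an inferior candidate is accepted
        # with probability exp(-delta / T), which shrinks as T is annealed.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current, current_cost = candidate, current_cost + delta
        # The annealing factor controls how quickly the acceptance of
        # inferior solutions decays.
        temperature *= alpha
    return current, current_cost
```

In such a scheme the annealing factor trades exploration against convergence speed, which is consistent with the abstract's observation that a suitable annealing factor improves convergence.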
Previous intelligent optimization algorithms carry out optimization without knowledge of the spatial structure, so their search is largely blind and the search time often grows at an exponential rate. Deterministic models studied at present usually have a fixed spatial structure, yet search algorithms have no knowledge of the relationship between that structure and the optimal solutions. To improve their performance, therefore, in addition to a more rational design of the search mechanism, the study of the search space is also of great significance.

1. Intelligent optimization algorithms find the optimal solutions of a problem by searching the whole space, so the size and structure of the space have a profound impact on their performance. Knowledge of the search space structure helps us understand how difficult a problem is to solve and helps us design appropriate strategies to improve solving efficiency.

(1) Traditional intelligent algorithms such as the Genetic Algorithm do not use knowledge of the search space, which greatly limits their performance. The solutions already examined, however, reflect information about the search space, and an algorithm can use this information to guide its search and achieve self-adaptive searching. Based on this idea, we design a knowledge-guided evolutionary programming algorithm. Simulation comparisons show that the new algorithm converges faster, and for large-scale optimization problems the relative error of its solutions can be kept within 3%.

(2) Although intelligent optimization algorithms share the characteristic of depending little on the specific problem, they are all random and search for optima over the whole space, so simplifying the search space can improve convergence speed and solution quality. Previous space contraction and partition methods are not tailored to discrete optimization problems, and the number of sub-spaces must be assigned by the user. Here we contract the search space of a combinatorial optimization problem through incomplete evolution of intelligent optimization algorithms: the space is divided into one or several optimal domains by solution backbones, and optimization is then carried out within these local domains (an illustrative sketch of the backbone idea is given at the end of this abstract). Simulations on Job-shop Scheduling Problems show that this method can obtain several high-quality solutions quickly.

2. For a specific optimization problem, its spatial structure or certain properties may already be known. If fully utilized, this knowledge can improve the efficiency of the algorithms, reduce the randomness of the search, and improve solution quality.

The space of a Mixed Integer Programming (MIP) model is divided into a number of discrete sub-spaces by its integer variables, and restricting the search to feasible sub-spaces can greatly reduce the workload of a search algorithm. In MIP models for production scheduling there necessarily exist competition constraints among the pieces of equipment that process tasks. We integrate this property into the algorithm to achieve further space contraction, thereby limiting the search to feasible sub-spaces. By analyzing the relationships among the 0-1 variables, and between them and the real variables, in the 0-1 MIP model of production scheduling for batch processes, we design a decomposition algorithm based on the Grouping Genetic Algorithm (GGA) and analyze the problem scale for which this decomposition algorithm is suitable.
The results show that when the complexity function O(f(·)) of the algorithm used grows faster than 2^n, and the number of continuous variables m and the number of 0-1 variables n exceed a threshold ζ, the decomposition algorithm can greatly reduce the computational work.

In short, after analyzing the search mechanism of intelligent algorithms and the search space, this dissertation carries out systematic work on methods to improve solving performance from these two aspects. On the one hand, accepting inferior solutions or creating new solutions in a reasonable manner, which extends the search scope, can enhance the global search ability of an algorithm and enable it to obtain high-quality solutions. On the other hand, using information about the search space to guide the search, or estimating the spatial structure, improves solving efficiency and solution quality, and simplifying the search space by means of known spatial structure and problem characteristics helps us design more suitable strategies to reduce computational complexity. The optimization strategy of combining intelligent algorithms with spatial structure is a promising research direction that is attracting more and more optimization researchers.
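As referenced in section (2) on space contraction above, the following sketch illustrates one way the backbone idea could be realized: the components shared by a set of elite solutions are fixed, and further search varies only the remaining positions. The tuple encoding, helper names, and the toy elite set are assumptions made for illustration; they are not the dissertation's implementation.

```python
def solution_backbone(elite_solutions):
    """Return the positions and values shared by every elite solution
    (illustrative sketch of a combinatorial solution backbone)."""
    backbone = {}
    n_vars = len(elite_solutions[0])
    for i in range(n_vars):
        values = {sol[i] for sol in elite_solutions}
        if len(values) == 1:              # same assignment in all elites
            backbone[i] = values.pop()
    return backbone

def contract(candidate, backbone):
    """Project a candidate into the contracted space by fixing the backbone
    variables; subsequent optimization varies only the free positions."""
    fixed = list(candidate)
    for i, value in backbone.items():
        fixed[i] = value
    return tuple(fixed)

# Toy usage: run the intelligent algorithm partially ("incomplete evolution"),
# treat its best solutions as the elite set, extract the backbone, and keep
# optimizing only the non-backbone variables inside the contracted domain.
elite = [(0, 2, 1, 3), (0, 2, 3, 3), (0, 2, 1, 3)]
bb = solution_backbone(elite)             # -> {0: 0, 1: 2, 3: 3}
print(bb, contract((1, 1, 2, 0), bb))     # -> (0, 2, 2, 3)
```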
Keywords/Search Tags:Intelligent optimization algorithm, Search space structure, Population diversity, Space contraction and partition, Computational complexity