
Research On Evolutionary Computing For Optimization Problem

Posted on: 2006-12-12  Degree: Doctor  Type: Dissertation
Country: China  Candidate: Q S Dou  Full Text: PDF
GTID: 1118360155453688  Subject: Computer application technology
Abstract/Summary:
Optimization is an important and widely applied branch of mathematics. At present, theoretical research on all kinds of optimization problems is progressing rapidly along with the fast advance of computer technology and optimization computing methods, and more and more new methods appear and are put into practical application. Traditional mathematical methods, which are still being developed, have long been used to solve optimization problems, but they are usually aimed at specific problems: some require relatively strict conditions on the search space, or high-order derivatives of the objective function. Compared with traditional mathematical methods, evolutionary computing has the following advantages: 1. it operates on a population, so the results essentially do not depend on the initial values; 2. it places no special requirements on the search space or the objective function; 3. it is easy to compute.

For unconstrained optimization problems, the classical genetic algorithm is described in this paper; in addition, several newer evolutionary methods are introduced, such as Evolutionary Programming (EP), Particle Swarm Optimization (PSO), and Differential Evolution (DE).

Classical Evolutionary Programming (CEP) is analyzed in this paper. The effect of the mutation vector is pointed out; the different effects of the Gaussian and Cauchy distributions on CEP are analyzed, and the probability that an individual mutated with the Gaussian distribution reaches the global optimum is estimated. Population Heuristic Evolutionary Programming (PHEP) is proposed in this paper. Four parameters, O_direct_sel, O_normal, O_mut and O_sel_mut, are used to control the distribution of individuals in the population. The mutation step size is adjusted according to that distribution: it is increased or decreased at the proper occasions, which maintains the diversity of the population and avoids blindness in adjusting the step size.
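To make the contrast between the two mutation distributions in EP concrete, the following sketch (Python with NumPy; the function name and signature are illustrative, not from the dissertation) mutates an individual with either Gaussian or Cauchy noise:

```python
import numpy as np

def ep_mutate(x, sigma, dist="gauss", rng=None):
    """One EP mutation step: perturb each component of x by sigma-scaled noise.

    Gaussian noise has light tails (fine-grained local moves); Cauchy noise
    has heavy tails (occasional long jumps that help escape local optima).
    """
    rng = np.random.default_rng() if rng is None else rng
    if dist == "gauss":
        noise = rng.standard_normal(x.shape)
    elif dist == "cauchy":
        noise = rng.standard_cauchy(x.shape)
    else:
        raise ValueError("dist must be 'gauss' or 'cauchy'")
    return x + sigma * noise
```

The heavy tails of the Cauchy distribution are exactly what makes its effect on CEP differ from the Gaussian case: most steps stay small, but rare very large steps can move an individual between distant basins of attraction.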
Owing to this improvement the group searching pressure and selection pressure are relieved, and the efficiency and searching precision of the algorithm are improved.

Theorems, with proofs, on the relationship between parameter settings and particle convergence are presented for the PSO method. It is shown that there are reasonable parameters w, φ1 and φ2 with which, under certain assumed conditions, a particle converges onto the line segment whose endpoints are p_i and p_g. Although there is some gap between the assumed conditions and actual situations, this result is a significant reference for improving PSO. Based on this analysis two improved methods are put forward: Particle Swarm Optimization with Simulated Annealing (PSOwSA) and Particle Swarm Optimization with Division of Work (PSOwDOW). In PSOwSA the idea of simulated annealing is introduced into PSO, and a temperature T is used to control the probability of a particle update. When the temperature T decreases slowly enough, a particle cannot easily escape from a promising search area; this strategy improves the particles' searching ability in local areas. In PSOwDOW a work-dividing strategy is adopted to overcome the disadvantages of standard PSO. The whole swarm is divided into three subgroups: POP_Core, P_Near and P_Far. POP_Core adopts the ideas of EP and seeks a new swarm best near the current optimum, which ensures sufficient searching in that neighborhood. Because the particles in P_Near converge, their search areas are considered the most likely places to find new individual bests and swarm bests. The movement tracks of the particles in P_Far diverge, so they play the role of exploring new search areas while the algorithm runs; these particles keep the population diverse and prevent the "premature" phenomenon from taking place.
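For reference, one step of the standard PSO recursion that the convergence analysis concerns can be sketched as follows (Python with NumPy; the values w = 0.729, φ1 = φ2 = 1.494 are common constriction-style defaults from the PSO literature, not necessarily the settings used in the dissertation):

```python
import numpy as np

def pso_step(x, v, p_i, p_g, w=0.729, phi1=1.494, phi2=1.494, rng=None):
    """One velocity/position update for a single particle.

    x, v     : current position and velocity
    p_i, p_g : personal best and swarm best
    w        : inertia weight; phi1, phi2 : cognitive/social coefficients
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # fresh uniform draws each step
    r2 = rng.random(x.shape)
    v_new = w * v + phi1 * r1 * (p_i - x) + phi2 * r2 * (p_g - x)
    return x + v_new, v_new
```

When x coincides with p_i = p_g the attraction terms vanish and the velocity simply shrinks by the factor w each step, which gives the intuition behind convergence toward the segment between p_i and p_g for suitable w, φ1 and φ2.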
The Differential Evolution (DE) method was put forward only recently; experimental results show that it is very effective for optimization problems. DE converges very fast in actual computing, it has few control parameters, and they are not as sensitive as those of PSO. Reasonable ranges for these parameters are presented in this paper.

Comparing the above methods under different conditions: PHEP, PSOwSA, PSOwDOW and DE are on the whole better than CEP and PSO under normal optimization conditions. Optimization problems, especially multi-peak problems, become more complex in high dimensions; the experimental results show that PHEP, PSOwDOW and DE have good properties and obtain satisfactory results in high-dimensional environments. PSOwSA is better than PSO when the temperature decreases extremely slowly, but the computing cost is then too large to be acceptable. In dynamic optimization environments only PSOwDOW can effectively track the changes of the dynamic optimum when the step size and frequency of the optimum's movement are rather large. PHEP and DE can adapt to this kind of dynamic optimization only when the noise is not too large. On the contrary, methods such as CEP, PSO and PSOwSA fail in most cases except when the impulse frequency and the impulse step length are both very small; PSOwSA is even worse than standard PSO in dynamic cases. From this comparison the conclusion can be drawn that none of the methods can be judged good or bad in a general sense: each has its own characteristics, and each suits different practical problems.

This paper also presents several evolution strategies for constrained optimization problems. Several methods for treating feasible individuals are described in detail, which include: 1. keeping the feasible individuals in the population; 2. using special operators or coding methods; 3.
adopting the principle that feasible individuals are preferential (namely, any feasible individual is considered better than all the infeasible individuals); 4. employing a given linear order to monitor the constraints and repair or penalize the infeasible individuals; etc.

Constrained optimization problems can be changed into unconstrained optimization problems by penalizing the infeasible individuals, which makes it possible to apply the methods for unconstrained optimization to solve them. However, it is not easy to design the penalty functions, which usually depend closely on the problem. On the one hand, it will be very difficult to find the global optimum if the penalty is too severe; on the other hand, if the penalty is not severe enough, its aim cannot be achieved. This paper presents the recent penalty strategies widely used in the evolutionary computing field, brought forward by Homaifar, by Joines and Houck, by Schoenauer and Xanthakis, and so on.

Culture algorithms are also introduced in this paper; this method roughly simulates the evolutionary process of human society. Here culture is regarded as the carrier of information, which can be accessed by all the members of society and used to guide their actions. There are two evolution spaces in a culture algorithm. One is the Belief Space, consisting of the experience and knowledge obtained during the evolution process; when solving constrained optimization problems, the constraints of the problem are expressed as knowledge in the Belief Space and stored there to guide the individuals in the Population Space toward the global optimum. The two spaces exchange information according to specific protocols. Culture algorithms can be combined with all of the methods discussed above, so there are many choices when using culture algorithms to treat...
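The penalty transformation described above can be sketched as follows (Python; this is a simple static penalty with a fixed coefficient r, chosen for illustration and not one of the specific schemes of Homaifar, Joines and Houck, or Schoenauer and Xanthakis):

```python
def penalized(f, constraints, r=1e3):
    """Turn a constrained problem (minimize f subject to g(x) <= 0 for every
    g in constraints) into an unconstrained one by adding r times the sum of
    squared constraint violations."""
    def f_pen(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + r * violation
    return f_pen
```

The trade-off noted above shows up directly in the coefficient r: with r too small the minimizer of f_pen may remain infeasible, while with r very large the penalized landscape becomes steep and ill-conditioned near the feasible boundary.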
Keywords/Search Tags: Optimization Problems, Evolutionary Algorithms, Particle Swarm Optimization (PSO), Culture Algorithms (CA), Differential Evolution (DE), Evolutionary Programming (EP), Evolutionary Strategies (ES)