Research On Particle Swarm Optimization For Dynamic Optimization Problems

Posted on: 2013-12-30    Degree: Doctor    Type: Dissertation
Country: China    Candidate: L Chen    Full Text: PDF
GTID: 1228330452460096    Subject: Computer software and theory
Abstract/Summary:
Many algorithms in the field of evolutionary computation have achieved good results; however, the real world is dynamic, and real-world optimization problems are often dynamic as well. For example, in job scheduling, new jobs arrive and must join the current schedule; in portfolio selection, each stock's price varies with time in the stock market; in multi-sensor management, sensors may fail and their performance may vary. Research on optimization algorithms for dynamic optimization problems (DOPs) has recently drawn growing interest and has become a hot topic. Among the many optimization algorithms, Particle Swarm Optimization (PSO) has been chosen for research on dynamic optimization problems because of its simplicity, efficiency and information sharing. However, traditional PSO faces two challenges: outdated memory and diversity loss. Diversity loss is the more serious of the two, because it prevents PSO from tracking the moved optima, so traditional PSO must be modified to handle DOPs better. This dissertation proposes several multiswarm PSO algorithms for DOPs from the viewpoints of the search space, evolutionary time, and the collection of useful information from each dimension of the particle positions in the whole swarm. The main contents of the dissertation are as follows:

1. The dissertation starts from the definition of DOPs and then describes the classification of dynamic environments, dynamic optimization benchmarks, performance measures for dynamic optimization algorithms, the detection of environment changes, the goal of dynamic optimization, and the relationship between dynamic problems, optimization algorithms and performance measures.

2. Research on DOPs using evolutionary algorithms (EAs) is reviewed, as is the current state of research on PSO for DOPs. Population-based dynamic optimization algorithms mainly use the methods of enhancing diversity, maintaining diversity, using memory, multiple populations (multiswarm) and prediction. The strengths and weaknesses of these methods are described, and the multiswarm method is identified as the most promising for DOPs.

3. The idea of good diversity is proposed. The key for PSO to address DOPs is to maintain good diversity while retaining the ability to exploit the regions in which the global optima lie.

4. To address a bottleneck of the multiswarm method, namely that steep peaks are difficult to find, the dissertation introduces the idea of the sequential niche technique into the traditional multiswarm method, from the viewpoint of exploring the part of the search space where steep peaks exist, and proposes a reverse-space-search multiswarm PSO for DOPs (RSPSO). RSPSO uses the information about the peaks found by the coarse, parallel-niche search of the traditional multiswarm method to modify the fitness function and then generates a new subpopulation, the reverse search subpopulation, whose particles are evaluated with the modified fitness function (a minimal sketch of this derating idea is given after this list). The search space explored by the original subpopulations is called the forward search space, and that explored by the reverse search subpopulation is called the reverse search space; the two kinds of subpopulations evolve cooperatively. The reverse search subpopulation tends to find steeper peaks, so more promising regions containing peaks are explored. Extensive experiments on the moving peaks benchmark (MPB) show that the ability to find peaks is enhanced, the algorithm's performance is greatly improved, and the algorithm is more robust to dynamic problems with different change severities and numbers of peaks.
5. The way the multiswarm method currently memorizes the found optima is short-term memory, and its defect is that the useful information obtained during earlier evolution is lost. Because this information cannot be fully exploited, the improvement of algorithm performance is hindered. From the viewpoint of evolutionary time, the dissertation makes full use of all the useful information gathered during evolution and proposes a general framework of multi-thread-memory multiswarm PSO for DOPs. The framework uses external storage to memorize all the useful information obtained during all previous evolution, including the best particles found near the peaks. Once the environment changes, the corresponding peak information in memory is replenished and updated with the good peak information obtained in the last environment, and better particles from the memory are then introduced into the current swarm (a per-peak memory of this kind is sketched after this list). The memory has multiple threads and is uninterrupted. The dissertation proposes two algorithms, multi-thread-memory multiswarm PSO (MTM-PSO) and multi-thread-memory and reverse-space-search multiswarm PSO (MTM-RSPSO). Both are tested on the MPB, and the experimental results demonstrate that the multi-thread memory improves performance considerably.

6. A clustering method based on the fractional global best formation (FGBF) technique is proposed. It uses information about the optimization function and calculates a dimensional fitness for each dimension of the particles in the swarms with respect to each peak. An artificial global best (aGB) particle is created by collecting, for each dimension, the dimensional component with the maximum dimensional fitness among all the particles in the swarm; there are as many aGBs as there are peaks. For each aGB, a certain number of particles with the shortest Euclidean distance to the aGB are selected, and these particles together with the aGB form a cluster (see the FGBF sketch after this list). A new algorithm, FGBF clustering multiswarm PSO for DOPs (FGBFPSO), is then proposed. FGBFPSO further advances PSO for DOPs from the viewpoint of collecting useful information from each dimension of the particles in the whole swarm. Experiments on the MPB verify that FGBFPSO is superior to the other algorithms.
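The abstract does not give RSPSO's exact formulation, but the sequential niche idea it builds on can be illustrated with a simple distance-based derating of the fitness around already-found peaks. The Python sketch below is a hypothetical illustration, assuming a maximization problem with nonnegative fitness; the function name derated_fitness, the multiplicative derating, and the parameters niche_radius and alpha are assumptions, not the dissertation's definitions.

```python
# Minimal sequential-niche derating sketch (illustrative assumptions:
# maximization, nonnegative raw fitness, multiplicative derating).
import numpy as np

def derated_fitness(raw_fitness, found_peaks, niche_radius=1.0, alpha=2.0):
    """Build a modified fitness that penalizes regions near known peaks.

    raw_fitness : callable mapping a position (1-D array-like) to a float
    found_peaks : peak positions located by the forward (coarse) search
    niche_radius: radius inside which a known peak derates the fitness
    alpha       : derating strength
    """
    def modified(x):
        value = raw_fitness(x)
        for peak in found_peaks:
            d = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(peak, dtype=float))
            if d < niche_radius:
                # The closer a point lies to a known peak, the stronger the
                # derating, which pushes the reverse search subpopulation
                # toward unexplored (possibly steeper) peaks.
                value *= (d / niche_radius) ** alpha
        return value
    return modified
```

Under these assumptions, the reverse search subpopulation would evaluate its particles with derated_fitness(f, peaks_found_so_far), while the forward subpopulations keep using the raw fitness f.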
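The multi-thread memory of item 5 can likewise be pictured as one memory slot ("thread") per discovered peak, replenished at each environment change and used to reinject good particles into the current swarm. The class below is a minimal, hypothetical sketch; the names MultiThreadMemory, update and reinject, and the replace-the-worst reinjection rule are illustrative assumptions rather than the dissertation's design.

```python
# Hypothetical per-peak ("multi-thread") memory sketch.
import numpy as np

class MultiThreadMemory:
    def __init__(self):
        self.threads = {}  # peak_id -> (position, fitness)

    def update(self, peak_id, position, fitness):
        """Replenish a thread with a better particle found near its peak."""
        best = self.threads.get(peak_id)
        if best is None or fitness > best[1]:
            self.threads[peak_id] = (np.array(position, dtype=float), fitness)

    def reinject(self, positions, fitnesses, raw_fitness):
        """After an environment change, re-evaluate remembered particles
        (memory may be outdated) and replace the worst swarm members
        with any remembered particles that are now better."""
        for pos, _ in self.threads.values():
            f = raw_fitness(pos)
            worst = int(np.argmin(fitnesses))
            if f > fitnesses[worst]:
                positions[worst] = pos.copy()
                fitnesses[worst] = f
```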
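For item 6, the FGBF step assembles one artificial global best (aGB) per peak from the best dimensional components over all particles, and a cluster is then formed from the particles nearest to that aGB. The sketch below assumes a user-supplied per-dimension fitness dim_fitness(component, dimension, peak); that interface and the function names form_aGB and cluster_around are illustrative assumptions.

```python
# Hypothetical FGBF sketch: build an aGB per peak and cluster around it.
import numpy as np

def form_aGB(particles, peak, dim_fitness):
    """Collect, for each dimension, the component with the highest
    dimensional fitness over all particles (one aGB per peak)."""
    particles = np.asarray(particles, dtype=float)   # (n_particles, n_dims)
    n_dims = particles.shape[1]
    aGB = np.empty(n_dims)
    for d in range(n_dims):
        scores = [dim_fitness(p[d], d, peak) for p in particles]
        aGB[d] = particles[int(np.argmax(scores)), d]
    return aGB

def cluster_around(aGB, particles, size):
    """Select the `size` particles with the shortest Euclidean distance
    to the aGB; the cluster is these particles plus the aGB itself."""
    particles = np.asarray(particles, dtype=float)
    dists = np.linalg.norm(particles - aGB, axis=1)
    nearest = np.argsort(dists)[:size]
    return np.vstack([particles[nearest], aGB])
```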
Keywords/Search Tags: Particle Swarm Optimization, Dynamic Optimization, Multiswarm Method, Sequential Niche Technique, Multi-Thread Memory, Fractional Global Best Formation