
A Study on Large-Scale Optimization Problems Based on Particle Swarm Optimization

Posted on: 2021-01-14    Degree: Master    Type: Thesis
Country: China    Candidate: X H Bai    Full Text: PDF
GTID: 2428330611457417    Subject: Mathematics
Abstract/Summary:
Particle swarm optimization (PSO), as one of the important intelligent optimization algorithms, has been applied to a wide range of optimization problems with good results. When solving large-scale optimization problems, however, it still suffers from low search efficiency, slow convergence, and a lack of population diversity. This thesis therefore improves the particle swarm algorithm to strengthen its optimization performance. The main work is as follows:

1. To improve the performance of the social learning particle swarm algorithm, a grouping strategy and the idea of opposition learning are introduced, yielding an improved particle swarm algorithm based on a grouping strategy for large-scale optimization problems. First, the dimensions of each particle are divided into several groups by the grouping strategy, while the population is split into a dominant particle group and a non-dominant particle group according to particle fitness; particles in the dominant group pass directly into the next generation. Each particle in the non-dominant group evolves by learning, for all dimensions within the same group, from the corresponding dimensions of the same demonstrator particle. Second, based on the idea of opposition learning, an opposition-learning mechanism is applied to a certain percentage of particles in the population; generating opposite solutions improves the global search ability of the algorithm (a sketch of this step is given below). Finally, the proposed algorithm is tested on the CEC2010 benchmark set, and the simulation results are compared with those of existing typical algorithms to verify its effectiveness.
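As an illustration of the opposition-learning step, the following Python sketch replaces a particle with its opposite solution whenever the opposite has better fitness. The search bounds, the fraction of particles selected, and the replacement rule are assumptions made for illustration; the abstract only states that the mechanism is applied to a certain percentage of the population.

```python
import numpy as np

def opposition_learning_step(positions, fitness_fn, lower, upper, fraction=0.2, rng=None):
    """Apply opposition-based learning to a random fraction of the swarm.

    For a particle x in [lower, upper], its opposite is lower + upper - x.
    The particle is replaced only if the opposite solution has lower
    (better) fitness. The value of `fraction` and the greedy replacement
    rule are assumptions; the thesis abstract does not specify them.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = positions.shape[0]
    chosen = rng.choice(n, size=max(1, int(fraction * n)), replace=False)
    for i in chosen:
        opposite = lower + upper - positions[i]
        if fitness_fn(opposite) < fitness_fn(positions[i]):
            positions[i] = opposite
    return positions

# Example on a 1000-dimensional sphere function (a common large-scale benchmark)
if __name__ == "__main__":
    dim, swarm_size = 1000, 50
    lower, upper = -100.0, 100.0
    sphere = lambda x: float(np.sum(x ** 2))
    swarm = np.random.uniform(lower, upper, size=(swarm_size, dim))
    swarm = opposition_learning_step(swarm, sphere, lower, upper)
```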
2. Based on an analysis of individual performance, a hierarchical learning strategy and a contribution-value strategy are designed, yielding an improved particle swarm algorithm based on hierarchical learning for large-scale optimization problems. First, to let particles in different states fully exploit and explore the search space, a hierarchical learning strategy is used: the population is layered according to particle fitness, particles in the first layer learn only from particles in their own layer, and particles in the other layers learn from their own layer and the layer above. Treating particles in different states differently during the update enhances both the exploration and the exploitation ability of the algorithm. Second, a contribution-value strategy is defined by measuring the fluctuation of the best individual's fitness over different iteration periods; the parameters of the population update formula are adjusted according to the contribution value, and a deletion strategy is applied to particles in the population, which reduces the waste of computing resources and improves convergence efficiency. Finally, the proposed algorithm is tested on the CEC2010 benchmark set and compared with five typical algorithms to verify its effectiveness.

3. Building on the hierarchical learning particle swarm algorithm, the population update strategy is improved, yielding a particle swarm algorithm based on an improved population update strategy for large-scale optimization problems. First, the population is divided into multiple sub-populations during evolution. Second, a dominant particle bank is maintained and different learning factors are assigned to each sub-population; the sub-populations evolve separately and are then merged into a new population for further evolution, which improves the global search ability of the algorithm (see the sketch after this paragraph). Finally, the proposed algorithm is tested on the CEC2010 benchmark set and compared with seven typical algorithms to verify its effectiveness.
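A minimal sketch of the sub-population scheme in the third contribution is given below: the swarm is split into sub-populations, each is updated with its own learning factors, and the groups are merged back into one swarm. The concrete update rule (a standard PSO velocity update here), the parameter values, and the handling of the dominant particle bank (omitted) are assumptions, since the abstract does not specify them.

```python
import numpy as np

def evolve_with_subpopulations(positions, velocities, pbest, gbest,
                               fitness_fn, learning_factors, w=0.7, rng=None):
    """One generation of the sub-population scheme.

    The swarm is split into as many sub-populations as there are
    (c1, c2) pairs in `learning_factors`; each sub-population is updated
    with its own factors, and the groups are then merged back into a
    single swarm before the personal and global bests are refreshed.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, dim = positions.shape
    groups = np.array_split(rng.permutation(n), len(learning_factors))
    for group, (c1, c2) in zip(groups, learning_factors):
        r1 = rng.random((len(group), dim))
        r2 = rng.random((len(group), dim))
        velocities[group] = (w * velocities[group]
                             + c1 * r1 * (pbest[group] - positions[group])
                             + c2 * r2 * (gbest - positions[group]))
        positions[group] += velocities[group]
    # Merge step: the groups index into the same arrays, so after the loop
    # the swarm is already a single population again.
    pos_fit = np.apply_along_axis(fitness_fn, 1, positions)
    pbest_fit = np.apply_along_axis(fitness_fn, 1, pbest)
    improved = pos_fit < pbest_fit
    pbest[improved] = positions[improved]
    pbest_fit[improved] = pos_fit[improved]
    gbest = pbest[np.argmin(pbest_fit)].copy()
    return positions, velocities, pbest, gbest
```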
Keywords/Search Tags:Grouping, Opposition learning, Hierarchical learning, Contribution, Large-scale optimization problem