
An Improved Particle Swarm Optimization For Large-scale Optimization

Posted on: 2021-04-17
Degree: Master
Type: Thesis
Country: China
Candidate: L Zhang
Full Text: PDF
GTID: 2518306554466014
Subject: Computer Science and Technology
Abstract/Summary:
In real life, most practical optimization problems involve a large number of decision variables; such problems are called large-scale optimization problems. Although traditional computational intelligence methods are effective on some low-dimensional optimization problems, their performance degrades as the number of decision variables grows, and it becomes difficult to find the global optimum. It is therefore increasingly necessary to improve computational intelligence methods so that they can solve large-scale optimization problems reasonably and effectively.

Particle swarm optimization (PSO) is a branch of computational intelligence. Because of its simple principle and easy implementation, it has attracted the attention of many scholars and is widely used to solve optimization problems and practical engineering problems. However, PSO is prone to falling into local optima and converges slowly, which makes large-scale optimization problems difficult for it. It is therefore necessary to improve the algorithm effectively to avoid these drawbacks. This thesis improves the swarm structure and the particle update (learning) strategy of PSO, increases swarm diversity, and makes fuller use of swarm information. The main work is as follows:

(1) A hierarchical sorting swarm optimizer for large-scale optimization (HSSO) is proposed. In this algorithm, the initial particles are sorted by fitness value, and the sorted particles are divided into two groups: a superior group with good fitness values and an inferior group with poor fitness values. Particles in the inferior group update themselves by learning from the superior group. The superior group is then treated as a new swarm, and is itself sorted and split again. These operations are repeated until only one particle remains; each updated inferior group forms a separate layer, so a complete hierarchy is eventually built. In the experiments, HSSO was applied to 39 benchmark functions and compared with several existing algorithms. The results show that, although HSSO is simple, it substantially improves both exploration and exploitation.

(2) A spread-based elite opposite swarm optimizer for large-scale optimization (SEOSO) is proposed. This algorithm introduces spread learning and elite opposite learning. In spread learning, the swarm is divided into several layered subgroups that can exchange particles to obtain more useful information, which improves swarm diversity. In elite opposite learning, the dominant (elite) particles are also updated, using information from the direction opposite to their flight. Finally, experiments were performed on a benchmark function set. The results show that the method outperforms several state-of-the-art algorithms on the benchmark functions and can effectively solve large-scale optimization problems.
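The hierarchical sorting scheme of HSSO can be sketched as follows. This is a minimal illustration of the idea described above (sort by fitness, let the inferior half learn from the superior half, then recurse on the superior half until one particle remains); the learning rule, coefficients, and parameter values here are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def sphere(x):
    # Separable benchmark objective: global minimum 0 at the origin.
    return float(np.dot(x, x))

def hsso(fitness_fn, dim=10, pop=32, iters=200, lb=-5.0, ub=5.0, seed=1):
    # Sketch of the hierarchical-sorting idea; the velocity update below
    # is a generic social-learning rule, assumed for illustration.
    rng = np.random.default_rng(seed)
    swarm = rng.uniform(lb, ub, (pop, dim))
    vel = np.zeros((pop, dim))
    for _ in range(iters):
        # Sort the whole swarm by fitness, best particle first.
        order = np.argsort([fitness_fn(p) for p in swarm])
        swarm, vel = swarm[order], vel[order]
        # Build the hierarchy: at each level the inferior half learns
        # from the superior half, then the superior half is split again,
        # until a single particle (the current best) remains untouched.
        n = pop
        while n > 1:
            half = n // 2
            for i in range(half, n):
                teacher = swarm[rng.integers(0, half)]  # a superior particle
                r1, r2 = rng.random(dim), rng.random(dim)
                vel[i] = r1 * vel[i] + r2 * (teacher - swarm[i])
                swarm[i] = np.clip(swarm[i] + vel[i], lb, ub)
            n = half
    best = min(swarm, key=fitness_fn)
    return best, fitness_fn(best)
```

Each pass of the inner `while` loop corresponds to one layer of the hierarchy, and the untouched particle at index 0 is the incumbent best, so the elite is never degraded.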
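The elite opposite learning used in SEOSO can be illustrated with a common form of generalized opposition-based learning, in which each elite particle is reflected through the dynamic bounds of the elite group and the better of the pair survives. The reflection rule `x* = k * (a + b) - x` and all names below are assumptions for illustration; the thesis's exact elite-opposite rule may differ:

```python
import numpy as np

def sphere(x):
    # Benchmark objective: global minimum 0 at the origin.
    return float(np.dot(x, x))

def elite_opposite_step(elites, fitness_fn, lb, ub, rng):
    # One round of generalized opposition-based learning on the elites:
    # reflect each elite x through the group's dynamic bounds [a, b]
    # via x* = k * (a + b) - x, then keep the better of x and x*.
    a = elites.min(axis=0)        # dynamic lower bound, per dimension
    b = elites.max(axis=0)        # dynamic upper bound, per dimension
    k = rng.random(elites.shape)  # random reflection coefficients in [0, 1)
    opposites = np.clip(k * (a + b) - elites, lb, ub)
    out = elites.copy()
    for i in range(len(elites)):
        if fitness_fn(opposites[i]) < fitness_fn(elites[i]):
            out[i] = opposites[i]  # the opposite point replaces the elite
    return out

rng = np.random.default_rng(0)
elites = rng.uniform(-5.0, 5.0, (8, 10))
improved = elite_opposite_step(elites, sphere, -5.0, 5.0, rng)
```

Because the better of each (elite, opposite) pair is kept, the step can only preserve or improve every elite's fitness, which is how opposition-based learning exploits "information about the opposite direction" without risking the elites.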
Keywords/Search Tags:Large scale optimization, particle swarm optimization, hierarchical sorting learning, spread learning, elite opposite learning