
Research On Large Scale Global Optimization Algorithms Based On Sampling

Posted on: 2020-10-25
Degree: Master
Type: Thesis
Country: China
Candidate: M M Wang
Full Text: PDF
GTID: 2428330575464631
Subject: Computer technology
Abstract/Summary:
Solving large-scale global optimization problems efficiently plays a crucial role in a wide range of fields, especially in practical applications such as wing design and the adjustment of urban circuit systems. Classical optimization algorithms cannot solve large-scale global optimization problems effectively because of "the curse of dimensionality", so the area offers broad room for research and development. Among the common large-scale global optimization algorithms, both cooperative coevolution algorithms based on subspace partitioning and improved heuristic algorithms require many parameters to tune the algorithm model, especially when solving the subspace optimization problem, which further motivates the study of large-scale global optimization.

In contrast to global optimization algorithms based on subspace partitioning, this paper proposes the Gibbs-Adaptive Particle Swarm optimization algorithm (Gibbs-APS) and the Gibbs-Parallel Adaptive Particle Swarm optimization algorithm (Gibbs-PAPS). These algorithms combine the particle swarm optimization strategy with a non-equal-probability individual selection strategy to solve complicated and volatile large-scale optimization problems. In Gibbs-APS, the Gibbs sampling algorithm is tightly integrated with the adaptive particle swarm optimization algorithm: as the particle swarm continuously evolves, the Gibbs sampling algorithm resamples the swarm from a hypothesized distribution of the generated particles. This heuristic allows the swarm to avoid some invalid computations in high-dimensional space and maintains the diversity of the population. Building on Gibbs-APS, this paper also presents an improved algorithm, Gibbs-PAPS. It fully accounts for the local convergence of the adaptive particle swarm optimization algorithm, combining random sampling with adaptive particle swarm optimization and letting the particle swarms learn from each other, thus maintaining the balance between exploitation and exploration in both time and space.

In addition, this paper combines large-scale global optimization with Monte Carlo tree search and proposes the Upper Confidence Tree-Gibbs Adaptive Particle Swarm optimization algorithm (UCT-GAPS). The UCT algorithm constructs a Monte Carlo search tree, which can also be called an evolutionary search tree, from which an optimal evolutionary route automatically emerges. Applying the UCT algorithm to large-scale global optimization makes full use of its powerful search ability and overcomes the evolutionary algorithms' defects of local convergence and prematurity. In UCT-GAPS, we not only incorporate the Gibbs-APS algorithm but also use the elite mean deviation to measure the diversity of the particle swarm and, on that basis, compute each node's return value. Finally, we demonstrate the effectiveness of the algorithms through experiments.

The significance of this paper is that three new heuristic algorithms are proposed from the perspectives of sampling and tree search. This not only improves the search performance of large-scale global optimization, but also facilitates in-depth study of large-scale global optimization algorithms, and has far-reaching significance for the practical application of large-scale global optimization problems.
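The interplay of a standard particle swarm update with a Gibbs-style resampling step can be sketched as follows. This is a minimal illustration, not the thesis's actual algorithm: the abstract gives no pseudocode, so the specific form of the Gibbs move (resampling one coordinate at a time from a normal distribution fitted to the swarm's per-coordinate statistics, accepted only if it improves the objective) is an assumption, and all names (`gibbs_aps`, `sphere`) and parameter values are illustrative.

```python
import math
import random

def sphere(x):
    # Benchmark objective: sum of squares; global minimum 0 at the origin.
    return sum(v * v for v in x)

def gibbs_aps(f, dim, swarm_size=20, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Illustrative sketch: adaptive PSO with a Gibbs-style coordinate resampling move."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for t in range(iters):
        # "Adaptive" here is modeled as a linearly decaying inertia weight.
        w = 0.9 - 0.5 * t / iters
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))

        # Gibbs-style move: for a few particles, resample each coordinate in turn
        # from a normal distribution fitted to the swarm's statistics along that
        # coordinate, keeping the change only if the objective improves.
        for i in rng.sample(range(swarm_size), k=max(1, swarm_size // 5)):
            for d in range(dim):
                mean = sum(p[d] for p in pos) / swarm_size
                var = sum((p[d] - mean) ** 2 for p in pos) / swarm_size
                cand = pos[i][:]
                cand[d] = min(hi, max(lo, rng.gauss(mean, math.sqrt(var) + 1e-9)))
                if f(cand) < f(pos[i]):
                    pos[i] = cand

        # Update personal and global bests.
        for i in range(swarm_size):
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

Because the resampling distribution shrinks as the swarm contracts, the Gibbs move injects diversity early on and becomes a fine-tuning step later, which matches the exploration/exploitation balance the abstract describes.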
Keywords/Search Tags:Large Scale Global Optimization, Gibbs Sampling, Monte Carlo Tree Search, Elite Mean Deviation