
Improved EGO for Solving Relatively High-Dimensional Optimization and Its Application

Posted on: 2017-01-04    Degree: Master    Type: Thesis
Country: China    Candidate: L Qi    Full Text: PDF
GTID: 2180330485486002    Subject: Computational Mathematics
Abstract/Summary:
Global optimization is worth studying in many areas of daily life: the design of airplane wings in fluid mechanics, texture synthesis in image processing, and many other problems can in essence be reduced to finding a global optimum. As a result, global optimization has attracted increasing attention from researchers and has become a research hotspot. The global optimization problems studied in this thesis, however, differ from ordinary ones in three respects. First, the objective function is computationally expensive, so only a limited number of evaluations can be afforded. Second, the problem is black-box, with no derivative or other structural information available. Third, the problem is highly nonlinear.

Efficient Global Optimization (EGO for short) is a classic global optimization algorithm. It performs well on problems whose objective functions have no analytic expression, or only a complex one (that is, black-box problems), and it also handles computationally expensive problems well, which accounts for its wide use. EGO is built on a Kriging model.

The shortcomings of Kriging-based EGO are as follows: the traditional algorithm often returns a local rather than a global optimum, and as the iterations accumulate, the convergence of the Kriging model slows down, especially when the dimension is high. To address these deficiencies, this thesis improves the algorithm in two ways: assigning weights to the component parts of the EI (Expected Improvement) function and adjusting those weights, and adopting Random Candidate Point Sampling during the iterations.
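The idea of weighting the components of EI can be sketched as follows. Standard EI decomposes into an exploitation term, (y_min − μ(x))Φ(z), and an exploration term, σ(x)φ(z), with z = (y_min − μ(x))/σ(x). The abstract does not state the exact weighting scheme, so the form below (a weight w on exploitation and 1 − w on exploration, as in the well-known weighted-EI variant) is an illustrative assumption, not the thesis's exact formula:

```python
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, y_min, w=0.5):
    """Weighted Expected Improvement (illustrative sketch).

    Splits EI into its exploitation term (predicted improvement over the
    incumbent) and exploration term (predictive uncertainty) and weights them:
        WEI(x) = w * (y_min - mu) * Phi(z) + (1 - w) * sigma * phi(z),
    where z = (y_min - mu) / sigma. Larger w favors greedier local search;
    smaller w favors global exploration.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    wei = np.zeros_like(mu)
    mask = sigma > 0  # where the model is certain, the expected improvement is 0
    z = (y_min - mu[mask]) / sigma[mask]
    wei[mask] = (w * (y_min - mu[mask]) * norm.cdf(z)
                 + (1 - w) * sigma[mask] * norm.pdf(z))
    return wei
```

Adjusting w over the course of the run then shifts the balance between local and global search, which is the role the thesis assigns to the weight-adjustment step.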
These two methods not only yield a better weight ratio but also balance local and global search: because Random Candidate Point Sampling is greedier, a better solution can be obtained, and the resulting algorithm outperforms traditional methods on relatively high-dimensional problems. The main work of this thesis is as follows.

Using Latin Hypercube Sampling (LHS for short), we generate the initial sample set, build the Kriging model and estimate its parameters with the DACE (Design and Analysis of Computer Experiments) method, and perform cross validation; if validation fails, we transform the objective function. The EI function serves as the iteration criterion: when its value falls below a threshold, the iteration stops. We assign weights to the component parts of the EI function and adjust these weights, and in the later iterations, after the initial sample set has been obtained by LHS, we adopt the Random Candidate Point method.

The thesis uses four test functions, the Keane, Levy, Michalewicz, and Rastrigin functions, in different dimensions, together with an application example, to evaluate the new EGO algorithm. The results show that the improved EGO algorithm performs well.
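The two sampling steps above can be sketched in Python. The abstract does not specify how the candidate pool is generated, so the second function below (a mix of local perturbations around the incumbent and uniform global points, a common candidate-point scheme) is an assumption for illustration; the names `latin_hypercube` and `random_candidates` and all parameter values are hypothetical:

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """Latin Hypercube Sample of n points in [0, 1]^dim:
    exactly one point falls in each of the n strata of every dimension."""
    samples = np.empty((n, dim))
    for d in range(dim):
        perm = rng.permutation(n)                  # which stratum each point occupies
        samples[:, d] = (perm + rng.random(n)) / n  # jitter within the stratum
    return samples

def random_candidates(x_best, n, dim, rng, frac_local=0.75, radius=0.1):
    """Candidate pool for one EGO iteration (illustrative scheme):
    mostly local perturbations around the incumbent best point (greedy),
    plus a few uniform global points; the next evaluation point is the
    candidate with the best acquisition (e.g. weighted EI) value."""
    n_local = int(frac_local * n)
    local = x_best + radius * rng.standard_normal((n_local, dim))
    global_pts = rng.random((n - n_local, dim))
    return np.clip(np.vstack([local, global_pts]), 0.0, 1.0)
```

Scoring such a candidate pool with the weighted EI instead of maximizing EI by a continuous optimizer is what makes the iteration greedier, as the abstract notes, while the global candidates preserve some exploration.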
Keywords/Search Tags: EGO Algorithm, Kriging Function, Relatively High-Dimensional Optimization Problem, EI Function, Random Candidate Point Sampling