
A sequential design for approximating the Pareto front using the expected Pareto improvement function

Posted on: 2010-04-02
Degree: Ph.D
Type: Thesis
University: The Ohio State University
Candidate: Bautista, Dianne Carrol
Full Text: PDF
GTID: 2448390002970450
Subject: Statistics
Abstract/Summary:
The aim of this thesis is to efficiently identify a set of good solutions that collectively provide even coverage of the Pareto front, the set of optimal solutions for a given multi-objective optimization problem (MOP). The members of the Pareto front comprise the set of compromise solutions from which a decision maker chooses a final design that best matches his or her preferences.

To reduce the computational overhead, we adopt a surrogate-guided optimization approach. The idea is to build fast approximations that can replace the long-running simulator during optimization while remaining reasonably accurate at predicting it at unevaluated feasible design points. This brings a tremendous gain in efficiency at the price of extra uncertainty, owing to the speculative nature of the search for optimal points. Consequently, two competing concerns must be balanced: global exploratory search, which improves surrogate accuracy, and local exploitative search, which converges rapidly to the optimal points. In a fully sequential optimization design, a key ingredient for achieving this balance is the criterion for selecting the next design point for costly-function evaluation.

Among the various surrogates considered so far, none has demonstrated a mechanism for balancing the tension between local exploitation and global exploration as automatically and as naturally as Gaussian processes, as illustrated by the Efficient Global Optimization (EGO) algorithm and its expected improvement (EI) criterion for single-objective optimization. We therefore extend the EI framework to solve the black-box MOP.

The existing literature on Gaussian process-guided sequential designs for the MOP is scarce on multivariate emulators that effectively incorporate dependencies in the objective function vector. It is also scant on improvement criteria suitably defined for the MOP that can decisively localize solutions in the vicinity of the Pareto front. Our proposed EmaX algorithm addresses this gap.
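The single-objective EI criterion referenced above has a well-known closed form under a Gaussian predictive distribution. A minimal sketch (the function name and interface are illustrative, not the thesis's implementation; `mu` and `sigma` would come from a fitted Gaussian process):

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Closed-form expected improvement for minimization, given the
    GP-predicted mean `mu` and standard deviation `sigma` at a candidate
    point, and the best objective value observed so far `f_min`."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (f_min - mu) * cdf + sigma * pdf
```

The two terms make the exploitation/exploration balance explicit: the first grows when the predicted mean beats the incumbent (exploitation), the second grows with predictive uncertainty (exploration).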
We implement a multivariate Gaussian process emulator that guides the sequential search for optimal solutions by means of the expected Pareto improvement function. We consider two models of the covariance structure: a non-separable independence model and a separable dependence model, the latter exemplifying a way of accounting for the covariances within the objective vector.

At each stage, the "current best" solutions are first identified: these solutions dominate other feasible solutions in the current experimental design but do not dominate each other. A constrained nonlinear program is then solved to locate the design point that offers the greatest potential Pareto improvement over the current non-dominated front.

Based on the maximin fitness function, the Pareto improvement is essentially a free upgrade offered by a prospective design point to at least one of the currently identified best designs, in at least one of the objectives. The term bears an interpretation analogous to its usage in economics, where a Pareto improvement is a change in economic management that betters the condition of one or more members without worsening the circumstances of the others. The idea is to progressively add increments of improvement until, ideally, a state of Pareto equilibrium is reached in which no more free upgrades are possible. At that point, moving from one Pareto solution to another entails trading off the performance criteria.

We demonstrated the viability of the EmaX algorithm on five MOPs of relatively low dimensionality, offering various degrees of difficulty in terms of the shape of the Pareto front. Three sequential algorithms were compared: IGP-PI, IGP-EmaX, and CoH-EmaX. The IGP procedures use a surrogate for the outputs based on the independence model, while CoH-EmaX is based on the dependence model.
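The non-dominated filtering and maximin-style improvement described above can be sketched as follows. This is a hedged illustration, not the thesis's code: it uses one common form of the maximin fitness (all objectives minimized), and the thesis's exact definition of the expected Pareto improvement may differ.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the 'current best' subset: points dominated by no other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def maximin_improvement(candidate, front):
    """Maximin-style Pareto improvement of a candidate objective vector
    relative to a non-dominated front (minimization).  Positive exactly
    when the candidate betters every front member in some objective,
    i.e. it would join the front as a new non-dominated point."""
    return -max(min(c - f for c, f in zip(candidate, p)) for p in front)
```

A usage sketch: with objective vectors `[(1, 2), (2, 1), (2, 2), (3, 3)]`, `nondominated` keeps `(1, 2)` and `(2, 1)`; a candidate at `(0, 0)` yields a positive improvement, while `(3, 3)` yields a negative one.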
The EmaX criterion was contrasted with a contending improvement criterion, the probability of improvement (PI). On the five MOPs tested, the EmaX criterion generally performed better than the PI in terms of efficiently and evenly covering the Pareto front. The solutions obtained by the EmaX algorithm were generally more spread out along the Pareto front than those obtained by the PI-directed sequential design, which were clustered or biased toward some regions of the front, even though the latter algorithm delivered larger solution sets. (Abstract shortened by UMI.)
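For contrast with the EI sketch earlier, the single-objective form of the PI criterion is simply the Gaussian probability of beating the incumbent; the thesis's PI is its multi-objective analogue, evaluated against the whole non-dominated front. A minimal sketch (illustrative names, not the thesis's implementation):

```python
import math

def probability_of_improvement(mu, sigma, f_min):
    """Single-objective probability of improvement under a Gaussian
    predictive distribution (minimization): P(Y < f_min) for
    Y ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        return 1.0 if mu < f_min else 0.0
    z = (f_min - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal cdf
```

Unlike EI, PI rewards any improvement equally regardless of its magnitude, which is one plausible reason a PI-directed search can cluster points in easy-to-improve regions of the front.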
Keywords/Search Tags: Pareto front, Solutions, Sequential, Function, Algorithm, MOP