
Convex Approximation Approach To DC Optimization And Its Applications

Posted on: 2013-02-06    Degree: Doctor    Type: Dissertation
Country: China    Candidate: S W Zhang    Full Text: PDF
GTID: 1110330371996657    Subject: Operational Research and Cybernetics
Abstract/Summary:
This dissertation focuses on the study of constraint qualifications for Lipschitz optimization problems, of sequential convex approximation methods and proximal point methods for optimization problems with DC functions, and, as an application, of the sequential convex approximation approach for joint chance constrained optimization problems. The main results obtained in this dissertation may be summarized as follows.

Chapter 2 introduces three constraint qualifications for constrained Lipschitz optimization that are weaker than the generalized Robinson constraint qualification (GRCQ) proposed in [1]: the weak generalized constraint qualification, the bounded constraint qualification and the generalized Abadie constraint qualification. The relationships among the three constraint qualifications are studied; the first two are verifiable sufficient conditions for the calmness of the solution mapping, whereas the generalized Abadie constraint qualification is weaker than calmness. The three constraint qualifications are applied to mathematical programs with complementarity constraints (MPCC), yielding new constraint qualifications for C-stationary points of MPCC.

Chapter 3 considers DC optimization problems with DC objective and DC constraint functions. Sequential convex approximation methods are constructed and their convergence properties are studied for both nonsmooth and smooth DC optimization problems. The stationarity conditions of the nonsmooth DC optimization problem are reformulated as a generalized equation of a monotone set-valued mapping. The sequential convex approximation method generates a sequence of points with decreasing objective values. Based on the continuity of the feasible regions of the convex subproblems and the asymptotic constraint qualification of Klatte and Li (1999) [2], we prove that any accumulation point of the sequence generated by the sequential convex approximation approach is a stationary point of the nonsmooth DC optimization problem. Similarly, for smooth DC optimization, under the generalized Slater condition we establish the continuity of the feasible regions of the convex subproblems and show that any accumulation point of the generated sequence is a stationary point of the smooth DC optimization problem.

Chapter 4 considers a nonsmooth optimization problem whose objective is the sum of a smooth function and a DC function and whose inequality constraints are given by DC functions. Search directions are determined by solving convex subproblems whose objectives are obtained by approximating the smooth function with a strictly convex quadratic function (or a proximal term) and the subtracted convex functions of the DC decompositions with linear functions. If the search direction is the exact solution of the convex subproblem and the stepsize is generated by the Armijo rule, we obtain the proximal subgradient method; if the search direction is an inexact solution, we obtain the inexact proximal point method. For both methods, we show that any accumulation point of the generated sequence is a stationary point of the DC optimization problem.
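To make the sequential convex approximation scheme of Chapter 3 concrete, here is a minimal Python sketch for an unconstrained DC problem min g(x) − h(x), assuming g is a positive definite quadratic and h = ||x||_1: at each iterate the subtracted convex function h is replaced by an affine minorant built from a subgradient, and the resulting convex subproblem is solved exactly. The decomposition and the names dca, Q, c are illustrative assumptions, not the dissertation's formulation, which also treats DC constraints.

```python
import numpy as np

def dca(Q, c, x0, max_iter=100, tol=1e-8):
    """Sequential convex approximation (DCA-style) sketch for
        min_x  g(x) - h(x),  g(x) = 0.5 x^T Q x + c^T x (Q positive definite),
                             h(x) = ||x||_1.
    At x_k, h is replaced by its affine minorant h(x_k) + <xi_k, x - x_k>,
    xi_k a subgradient of h at x_k, and the convex subproblem is solved
    exactly; objective values are nonincreasing along the iterates."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        xi = np.sign(x)                  # a subgradient of ||.||_1 at x
        # convex subproblem: min 0.5 x^T Q x + (c - xi)^T x,
        # first-order condition Q x = xi - c
        x_new = np.linalg.solve(Q, xi - c)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    Q = A @ A.T + 5.0 * np.eye(5)        # positive definite
    c = rng.standard_normal(5)
    print("stationary point:", dca(Q, c, x0=rng.standard_normal(5)))
```

Any limit of such an iteration satisfies the stationarity condition 0 ∈ ∇g(x) − ∂h(x), which is the kind of accumulation-point result proved in Chapter 3.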
Chapter 5 deals with a nonsmooth optimization problem whose objective function is a nonsmooth DC function and whose constraint set is defined by a joint chance constraint. We adopt a method similar to that of Hong, Yang and Zhang (2011) [3] to handle the joint chance constraint: a DC function is constructed to approximate the probabilistic function, so that a problem (Pε), depending on a parameter ε > 0, is obtained as an approximation of the original chance constrained problem. It is demonstrated that, under mild conditions, the global solutions and the stationary points of (Pε) converge to the global solution set and the set of stationary points of the original problem, respectively, as ε → 0. For a fixed problem (Pε), the sequential convex approximation approach of Chapter 3 is employed and a convergence theorem is proved. Since the convex subproblems in the sequential convex approach are defined by expectation functions, we solve them by the Monte Carlo method. The resulting sequential convex approximation Monte Carlo approach is applied to a stochastic L1-norm minimization problem and a nonconvex stochastic quadratic programming problem, and numerical results are reported. Unlike the work of Hong, Yang and Zhang (2011) [3], the objective function in our problem is a DC function, and our analysis requires neither the differentiability of the maximum random function c(x,ξ) = max{c1(x,ξ), …, cp(x,ξ)} nor the continuous differentiability of F(t,x), the cumulative distribution function of c(x,ξ).
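For illustration, the following sketch estimates by Monte Carlo one standard DC upper bound on the violation probability of a joint chance constraint, (1/ε)(E[(c(x,ξ)+ε)+] − E[(c(x,ξ))+]) ≤ α, in the spirit of the ε-approximation (Pε) described above. That this is the exact construction used in the dissertation is an assumption; the names dc_chance_approx, eps and alpha are illustrative.

```python
import numpy as np

def dc_chance_approx(x, xi_samples, c, eps, alpha):
    """Monte Carlo estimate of a DC approximation of the joint chance
    constraint P(max_j c_j(x, xi) <= 0) >= 1 - alpha.  Pointwise,
    ((t + eps)_+ - t_+) / eps >= 1{t >= 0}, so the estimated quantity
        (1/eps) * ( E[(c(x,xi) + eps)_+] - E[(c(x,xi))_+] )
    conservatively bounds the violation probability; the approximating
    constraint asks it to be <= alpha (nonpositive return value)."""
    vals = np.array([c(x, xi) for xi in xi_samples])   # c = max of the c_j
    plus = lambda t: np.maximum(t, 0.0)
    estimate = (plus(vals + eps).mean() - plus(vals).mean()) / eps
    return estimate - alpha

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # toy joint constraint: c(x, xi) = max_j (xi_j - x_j)
    c = lambda x, xi: np.max(xi - x)
    samples = rng.standard_normal((2000, 3))
    x = np.full(3, 2.0)
    print(dc_chance_approx(x, samples, c, eps=0.1, alpha=0.05))
```

Both expectations in the estimate are convex in x whenever each c_j(·,ξ) is convex, so the constraint is a difference of convex expectation functions, which is what makes the Chapter 3 machinery applicable.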
Keywords/Search Tags: Lipschitz optimization, DC optimization, joint chance constrained optimization, convex approximation method, proximal point method, Monte Carlo method