
Exact Penalty Function For Solving Nonlinear Constrained Optimization Problems

Posted on: 2013-02-04    Degree: Master    Type: Thesis
Country: China    Candidate: R R Li    Full Text: PDF
GTID: 2210330374461352    Subject: Applied Mathematics
Abstract/Summary:
The exact penalty function method is an important method for solving nonlinear constrained optimization problems. In theory, the exact penalty function method only needs to solve the penalized problem for a finite value of the penalty parameter in order to obtain a solution of the constrained optimization problem, thereby avoiding the ill-conditioning that arises when the penalty parameter tends to infinity. Exact penalty functions are divided into nondifferentiable exact penalty functions and continuously differentiable exact penalty functions. In general, a simple exact penalty function must be nondifferentiable, which prevents the fast local convergence of some fast algorithms and leads to the "Maratos effect". A continuously differentiable exact penalty function overcomes these shortcomings and has better properties. The augmented Lagrangian function is a special kind of continuously differentiable exact penalty function.

First, for a general nonlinear constrained optimization model, this thesis proposes a new nonlinear Lagrangian function, discusses its properties at a KKT point, proves that, under mild conditions, the iterates generated by the dual algorithm based on this function are locally convergent, and gives error estimates of the solution with respect to the penalty parameter. This provides a new way to solve nonlinear constrained optimization problems.

Then a twice continuously differentiable smooth approximation of the nonsmooth penalty function of the above model is constructed; error estimates relating the optimal values of the original optimization problem, the corresponding nonsmooth penalty problem, and the smooth penalty problem are given; an algorithm based on the smooth penalty function is designed and proved, under mild conditions, to be globally convergent; finally, numerical experiments illustrate the effectiveness of the algorithm.

Finally, for the cone optimization problem, the augmented Lagrangian function, which is a special exact penalty function, is used to construct an iterative algorithm, and the algorithm is shown to possess a weak form of global convergence: an ε-global optimal solution is defined, and for each iteration k a corresponding ε_k-global optimal solution is obtained; these solutions converge to an ε-global optimal solution, which establishes the ε-global convergence of the algorithm.
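As a concrete illustration of the nondifferentiability discussed above, consider the classical l1 exact penalty function (a standard example; the abstract does not specify the thesis's own penalty function) for the problem of minimizing f(x) subject to g_i(x) <= 0 (i = 1, ..., m) and h_j(x) = 0 (j = 1, ..., p):

```latex
P_{\rho}(x) \;=\; f(x)
  \;+\; \rho \sum_{i=1}^{m} \max\{0,\, g_i(x)\}
  \;+\; \rho \sum_{j=1}^{p} \lvert h_j(x)\rvert .
```

Under suitable constraint qualifications, minimizers of P_rho solve the constrained problem once rho exceeds a finite threshold (roughly, the size of the largest Lagrange multiplier), but the max and absolute-value terms make P_rho nonsmooth on the constraint boundary, which is exactly what motivates the smooth approximations and the continuously differentiable exact penalty functions studied in the thesis.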
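The smoothing-and-penalization strategy of the second part can be sketched as follows. This is a minimal sketch assuming the standard smoothings max{0, t} ~ (t + sqrt(t^2 + eps^2))/2 and |t| ~ sqrt(t^2 + eps^2) and a simple schedule that increases rho and decreases eps; the thesis's own smoothing, error estimates, and update rules are not given in the abstract, and the helper names (smooth_max0, smoothed_penalty_method) are illustrative.

```python
# Sketch of a smoothed penalty method: replace the nonsmooth l1 penalty terms
# by twice-differentiable approximations and solve a sequence of smooth
# unconstrained problems with rho -> infinity and eps -> 0.
import numpy as np
from scipy.optimize import minimize

def smooth_max0(t, eps):
    """Twice-differentiable approximation of max{0, t} for eps > 0."""
    return 0.5 * (t + np.sqrt(t * t + eps * eps))

def smooth_abs(t, eps):
    """Twice-differentiable approximation of |t| for eps > 0."""
    return np.sqrt(t * t + eps * eps)

def smoothed_penalty_method(f, gs, hs, x0, rho=1.0, eps=1.0,
                            rho_growth=10.0, eps_decay=0.1, iters=6):
    """min f(x) s.t. g_i(x) <= 0, h_j(x) = 0, via smooth penalty subproblems."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        def P(x, rho=rho, eps=eps):
            pen = sum(smooth_max0(g(x), eps) for g in gs) \
                + sum(smooth_abs(h(x), eps) for h in hs)
            return f(x) + rho * pen
        x = minimize(P, x, method="BFGS").x
        rho *= rho_growth      # strengthen the penalty
        eps *= eps_decay       # sharpen the smoothing
    return x

# Toy example: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0  (solution (0.5, 0.5))
x_star = smoothed_penalty_method(
    f=lambda x: x[0] ** 2 + x[1] ** 2,
    gs=[], hs=[lambda x: x[0] + x[1] - 1.0],
    x0=[0.0, 0.0])
print(x_star)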
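The last part relies on an augmented Lagrangian, which for an equality-constrained problem takes the classical form L_A(x, lambda, rho) = f(x) + lambda^T h(x) + (rho/2) ||h(x)||^2; the cone-optimization variant used in the thesis is not spelled out in the abstract. A minimal sketch of the classical method of multipliers, with illustrative helper names:

```python
# Sketch of the classical augmented Lagrangian (method of multipliers)
# for equality constraints: minimize L_A in x, then update the multipliers
# by the first-order rule lambda <- lambda + rho * h(x).
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, hs, x0, rho=10.0, outer=20, tol=1e-8):
    """min f(x) s.t. h_j(x) = 0 for all constraint functions in hs."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(hs))
    for _ in range(outer):
        def L_A(x, lam=lam, rho=rho):
            h = np.array([h_j(x) for h_j in hs])
            return f(x) + lam @ h + 0.5 * rho * (h @ h)
        x = minimize(L_A, x, method="BFGS").x
        h = np.array([h_j(x) for h_j in hs])
        if np.linalg.norm(h) < tol:   # constraints (nearly) satisfied
            break
        lam = lam + rho * h           # multiplier update
    return x, lam

# Toy example: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0
x_star, lam_star = augmented_lagrangian(
    f=lambda x: x[0] ** 2 + x[1] ** 2,
    hs=[lambda x: x[0] + x[1] - 1.0],
    x0=[0.0, 0.0])
print(x_star, lam_star)
```

Unlike a pure penalty method, the multiplier update lets the method converge for a fixed finite rho, which is the exactness property the abstract refers to.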
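The notion of an ε-global optimal solution used above can be read in the standard sense (the thesis's precise definition is not stated in the abstract): a feasible point x_eps such that

```latex
f(x_{\varepsilon}) \;\le\; \inf\{\, f(x) : x \in \mathcal{F} \,\} + \varepsilon ,
```

where F denotes the feasible set. The claimed weak global convergence then means that the ε_k-global optimal solutions produced at the iterations k converge to such an ε-global optimal solution.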
Keywords/Search Tags: Nonlinear constrained optimization problem, exact penalty function, smooth approximation, augmented Lagrangian function