Nonlinear constrained optimization problems are the most common form of nonlinear optimization problems, as well as one of the most challenging subjects in optimization research. This thesis therefore studies several algorithms for nonlinear constrained optimization problems. The main research methods include the feasible direction method, the Lagrange-Newton method, penalty function methods, sequential quadratic programming, sequential systems of linear equations, trust region algorithms, and so on. Among these, the Lagrange-Newton method and sequential quadratic programming are the more important ones, although they are seldom applied to general constrained optimization problems. Combining an active-set strategy with the feasible direction method to handle inequality constraints makes the overall structure of the algorithm more complex, and using an exact line search enlarges the computational cost. This thesis therefore studies the Lagrange-Newton method for general constrained optimization problems and uses the Armijo line search to obtain an improved Lagrange-Quasi-Newton method. In addition, a hybrid algorithm is given with the help of the Lagrange-Newton results and the generalized projection technique.

This thesis consists of five chapters, organized as follows.

In Chapter 1, we introduce the development of nonlinear constrained programming, describe the inner connections among the various algorithms for nonlinear constrained optimization problems, and then present the optimality conditions for general constrained optimization problems. Next, we summarize the convergence and superlinear convergence conditions of these algorithms. Moreover, we indicate the research background, the current state of the field, and our contributions.

In Chapter 2, we extend the Lagrange-Newton method, originally used for equality constrained problems, to general constrained problems, translating the optimality conditions into a system of linear equations in order to solve the general constrained optimization problem. Under suitable assumptions, we prove that the algorithm is convergent with a superlinear convergence rate. The numerical results show that the algorithm is effective.

In Chapter 3, an improved Lagrange-Quasi-Newton method is presented for general constrained optimization problems. This method adopts the Armijo line search and a modified BFGS quasi-Newton update, which keeps the approximation positive definite. Under suitable assumptions, we prove that the algorithm is convergent with a superlinear convergence rate. Numerical experiments and comparisons with other methods indicate that this method converges quickly.

In Chapter 4, a new hybrid algorithm is presented for inequality constrained optimization problems. This algorithm adopts the generalized projection technique and the Armijo line search; at each iteration, only one system of linear equations needs to be solved, so the computational cost is greatly reduced. Under weakened assumptions, we prove that the algorithm is convergent. The numerical results indicate that this method is effective.

In Chapter 5, we summarize the conclusions of this thesis and propose, as a direction for further research, the use of filter methods instead of penalty functions.
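
As a brief illustration of the Lagrange-Newton framework studied in Chapters 2 and 3, sketched here only for the equality constrained case and with notation assumed rather than taken from the thesis: for the problem of minimizing \(f(x)\) subject to \(h(x) = 0\), with Lagrangian \(L(x,\lambda) = f(x) + \lambda^{\top} h(x)\), one Lagrange-Newton step applies Newton's method to the first-order optimality conditions, which amounts to solving a single linear system
\[
\begin{bmatrix}
\nabla^{2}_{xx} L(x_k,\lambda_k) & \nabla h(x_k) \\
\nabla h(x_k)^{\top} & 0
\end{bmatrix}
\begin{bmatrix}
d_k \\ \delta_k
\end{bmatrix}
= -
\begin{bmatrix}
\nabla_x L(x_k,\lambda_k) \\ h(x_k)
\end{bmatrix},
\]
after which the iterates are updated as \(x_{k+1} = x_k + t_k d_k\) and \(\lambda_{k+1} = \lambda_k + t_k \delta_k\). In the quasi-Newton variant of Chapter 3, \(\nabla^{2}_{xx} L\) is replaced by a modified BFGS approximation that is kept positive definite, and the step size \(t_k\) is chosen by an Armijo-type rule of the form \(\phi(x_k + t_k d_k) \le \phi(x_k) + \sigma\, t_k\, \phi'(x_k; d_k)\) with \(\sigma \in (0,1)\), where \(\phi\) denotes a suitable merit function; the precise merit function and the treatment of general (inequality) constraints are as developed in the thesis.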