
Derivative-Free Cubic Regularization Methods for Linearly Constrained Optimization Problems and Nonlinear Equations

Posted on: 2018-11-13    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X J Huang    Full Text: PDF
GTID: 1360330515476949    Subject: Operational Research and Cybernetics
Abstract/Summary:
The dissertation studies derivative-free cubic regularization methods for solving optimization problems with simple constraints and systems of nonlinear equations. A feature that frequently appears in engineering design, circuit design, molecular geometry and similar applications is the unavailability of derivative information, which occurs in several forms and degrees. For a variety of reasons derivatives may be unreliable, too expensive to compute, or simply unavailable, yet optimization must still be carried out under these circumstances. A class of nonlinear optimization techniques called derivative-free optimization (DFO) methods can solve such problems efficiently. The main idea of DFO methods is to construct a suitable local model of the function whose derivatives are unavailable and then to solve the resulting subproblem with well-developed, existing gradient-based methods. Other widely used DFO methods include pattern-search methods, conjugate-direction methods and further methods based on interpolation or approximation. DFO methods are not as well developed as gradient-based methods because of the inevitable cost of function evaluations; at present only small problems, such as unconstrained or bound-constrained optimization, can be handled effectively, and the remaining problem classes are still under investigation. Line search methods and trust-region methods are two widely used globalization schemes for unconstrained optimization and are often used to globalize Newton-like iterations; the work presented here studies a fairly new alternative, cubic regularization methods. The general cubic regularization method proposed by Cartis et al. [13] contains a dynamic positive parameter σ_k which can be viewed as the reciprocal of the trust-region radius (see the proof of Theorem 3.1 of [13]), so the rules for updating σ_k at each iteration mimic those for changing the trust-region radius. Since trust-region models are widespread in optimization, it is worth investigating where cubic models might be employed in their place.

The dissertation consists of six main parts. The first part reviews some basic knowledge of optimization and introduces the adaptive cubic regularization methods and derivative-free optimization.

The second part proposes a derivative-free cubic regularization method for solving nonlinear equations. A system of nonlinear equations can be written mathematically as a vector function set equal to the zero vector. Exploiting this particular structure, we approximately solve the problem by minimizing the sum of squares of the equations, i.e. a least-squares problem, instead of attacking the system of nonlinear equations directly. Since the derivatives of the component functions of the vector function are not available, we construct a local linear or quadratic model of each component, assemble these into a polynomial model of the objective, and define the next iterate by approximately minimizing this polynomial. Employing the general cubic regularization framework, we construct a derivative-free cubic regularization algorithm for systems of nonlinear equations and prove its global convergence and fast local convergence rate under suitable assumptions. Numerical results show that the proposed method is effective and feasible.
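As a rough sketch of these two building blocks in generic notation (the symbols g_k, B_k and σ_k below are the standard ones and may differ in detail from the dissertation's own formulation), the system F(x) = 0 with F: R^n → R^m is replaced by the least-squares problem

    \min_{x \in \mathbb{R}^n} f(x) = \tfrac{1}{2}\,\|F(x)\|^2,

and each trial step s_k approximately minimizes the cubic-regularized model

    m_k(s) = f(x_k) + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\,\|s\|^3,

where, in the derivative-free setting, g_k and B_k are obtained from a local interpolation model built from sampled function values rather than from true derivatives, and σ_k > 0 is increased or decreased between iterations much like the reciprocal of a trust-region radius.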
The third part studies bound-constrained optimization. Coleman et al. [19] proposed a trust-region subproblem, defined by minimizing a quadratic function subject only to an ellipsoidal constraint, that generates strictly feasible iterates for bound-constrained optimization. However, the search direction generated by this trust-region subproblem must satisfy strict interior feasibility, which leads to the computational difficulty of solving the subproblem many times before an acceptable step is obtained. Motivated by this technique, we define an affine scaling cubic regularization model using the affine scaling matrix of [19] and, inspired by the combination used by Nocedal and Yuan in [72], employ a line search technique to obtain an acceptable trial step without solving the subproblem repeatedly. The objective function is replaced by a polynomial approximation model determined by sampled function values. With these techniques we propose an interior affine scaling derivative-free cubic regularization method for bound-constrained optimization and establish global convergence and a local superlinear convergence rate under mild assumptions.

The fourth part studies the same problem but first uses finite differences to approximate the derivatives of the objective function, and then uses the same affine scaling matrix to define the affine scaling cubic regularization model. The subproblem differs from that of the previous method: it globally minimizes the cubic model instead of merely computing a Cauchy step, and a different decrease condition is used in the line search. Global convergence and a superlinear convergence rate are again guaranteed under suitable assumptions, and the numerical results show that both methods are effective and feasible.

The fifth part proposes an interior affine scaling cubic regularization method for solving linear inequality constrained optimization. Coleman et al. [20] proposed an affine scaling technique to handle linear inequality constraints and formed an augmented trust-region subproblem with linear equality constraints within the general trust-region method. At each iteration the trial step generated by this subproblem must be strictly feasible, which is a costly process requiring the subproblem to be solved repeatedly. We make full use of the affine scaling technique of Coleman et al. [20] to define our affine scaling cubic regularization model and, to avoid solving the cubic model repeatedly, employ a line search technique to modify the trial step so that it is always strictly feasible. Combining these ideas, we propose an interior affine scaling derivative-free cubic regularization method for linear inequality constrained optimization using a polynomial interpolation technique, prove global convergence and a local superlinear convergence rate, and demonstrate effectiveness and feasibility numerically. (A schematic sketch of the scaled subproblem used in the third through fifth parts is given after the concluding paragraph below.)

The sixth part of the dissertation studies systems of nonlinear equations with linear inequality constraints. Following the main ideas of the second and fifth parts, we solve a least-squares problem with linear inequality constraints instead of the original problem. Using a polynomial interpolation technique, we interpolate all the component functions to form the objective function, and then apply the method of the fifth part to the transformed problem. Under suitable assumptions we establish the convergence results.

Finally, we summarize the main research results of the thesis and give some ideas for further study.
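For reference, here is a schematic of the affine-scaled, derivative-free subproblem running through the third to fifth parts, written in generic notation that only approximates the dissertation's exact formulation: D_k denotes the diagonal affine scaling matrix built from the distances of x_k to its bounds or inequality constraints, and the forward-difference quotient is one common way to realize the finite-difference approximation of the fourth part,

    \min_{s}\; g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\,\|D_k^{-1} s\|^3,
    \qquad (g_k)_i \approx \frac{f(x_k + h\,e_i) - f(x_k)}{h},

after which a backtracking line search along the resulting direction keeps the iterates strictly feasible.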
Keywords/Search Tags: optimization, affine scaling, inexact line search, global convergence, adaptive regularization with cubics, local convergence, derivative-free optimization