
Study On Computational Methods For Two Typical Classes Of Nonlinear Problems

Posted on: 2001-04-13
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X B Gao
GTID: 1100360002951296
Subject: Applied Mathematics
Abstract/Summary:
Since nonlinear problems describe natural phenomena more accurately, this paper considers nonlinear optimization problems and two important classes of nonlinear evolution equations, with particular emphasis on neural networks for nonlinear optimization. It is well known that optimization problems arise in a wide variety of fields in science and technology, and their real-time solutions are often required. However, traditional algorithms on digital computers cannot meet these time requirements, since the computing time needed for a solution depends heavily on the dimension and structure of the problem. One possible and very promising approach to the real-time solution of large-scale optimization problems is to apply artificial neural networks, because of their inherent massive parallelism. There are many neural networks for nonlinear optimization problems, especially nonlinear programming, but few of them are both feasible and effective, and many have drawbacks. In Chapters 2 to 8 of this paper, several feasible and effective neural networks for nonlinear optimization problems are proposed, and their properties are rigorously analyzed and established. These models have very good stability and convergence properties and overcome the drawbacks of previous networks, so that many optimization problems can be solved thoroughly and effectively. The main research work and results are as follows.

Based on necessary and sufficient optimality conditions, a new neural network for interval convex quadratic programming problems is constructed in Chapter 2. It involves no parameters, and the number of state variables it needs is small. The proposed network is Lyapunov stable and asymptotically convergent in the large to an exact solution. Moreover, it is general and includes the other models for convex quadratic programming.
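The abstract does not reproduce the network equations, but the general idea behind such continuous-time neural networks for box-constrained convex quadratic programming can be sketched as a projected gradient flow simulated by Euler integration. The problem data (Q, c, the bounds) and the step size below are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def project_box(x, lo, hi):
    """Project x onto the box [lo, hi] componentwise."""
    return np.clip(x, lo, hi)

def qp_network(Q, c, lo, hi, x0, step=0.1, iters=500):
    """Euler simulation of the projection dynamics
        dx/dt = P(x - (Qx + c)) - x,
    whose equilibria are exactly the KKT points of
        min 0.5 x'Qx + c'x  subject to  lo <= x <= hi.
    """
    x = x0.astype(float)
    for _ in range(iters):
        x += step * (project_box(x - (Q @ x + c), lo, hi) - x)
    return x

# Example: minimize x1^2 + x2^2 - 2*x1 - 4*x2 on the box [0, 3]^2;
# the unconstrained minimizer (1, 2) happens to lie inside the box.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
x_star = qp_network(Q, c, lo=0.0, hi=3.0, x0=np.zeros(2))
print(x_star)  # close to [1.0, 2.0]
```

Note that the dynamics need no penalty parameter: feasibility is enforced by the projection itself, which matches the "no parameter involved" property claimed for the Chapter 2 network.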
In Chapter 3, a neural network for extended quadratic programming problems is proposed, based on the necessary and sufficient conditions of a saddle point. It also contains no parameters, and it has the same good stability and convergence as the network in Chapter 2. The proposed network includes the models for linear and quadratic programming. In Chapter 4, a high-performance feedback neural network for nonlinear convex programming problems is proposed, obtained by approximating the optimal value successively from below and constructing a sequence of energy functions with a corresponding subnetwork for finding each minimum point. The proposed network involves no dual variables or penalty parameters, and the number of state variables it needs is the least. It is guaranteed to converge asymptotically to an exact optimal solution. In Chapter 5, a continuous-time neural network and a discrete-time one for nonlinear convex programming with a quadratic objective function are defined on the whole space. The continuous-time network contains no parameters, and the design parameter of the discrete-time one is bounded and fixed, so they are more suitable for simulation or for implementation in digital hardware. Both are proved to be Lyapunov stable and to converge asymptotically in the large to an exact optimal solution. In Chapter 6, a neural network for nonlinear convex programming with lower and upper bounds on each variable is constructed by transforming the problem into an equivalent variational inequality. There...
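The saddle-point approach mentioned for Chapter 3 can be illustrated, for the simplest equality-constrained case, by primal-dual gradient dynamics: descend on the Lagrangian in the primal variable and ascend in the multiplier. The specific problem data and step size below are assumptions for illustration only, not details from the dissertation.

```python
import numpy as np

def saddle_network(Q, c, A, b, x0, lam0, step=0.05, iters=4000):
    """Euler simulation of the primal-dual (saddle-point) dynamics
        dx/dt   = -(Qx + c + A'lam)   (descent in the primal x)
        dlam/dt =  Ax - b             (ascent in the multiplier lam)
    for  min 0.5 x'Qx + c'x  subject to  Ax = b,  Q positive definite.
    Equilibria are exactly the saddle points of the Lagrangian,
    i.e. the KKT pairs (x*, lam*).
    """
    x, lam = x0.astype(float), lam0.astype(float)
    for _ in range(iters):
        dx = -(Q @ x + c + A.T @ lam)
        dlam = A @ x - b
        x += step * dx
        lam += step * dlam
    return x, lam

# Example: minimize x1^2 + x2^2 - 2*x1 - 4*x2  subject to  x1 + x2 = 2.
# KKT conditions give x* = (0.5, 1.5) with multiplier lam* = 1.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x_star, lam_star = saddle_network(Q, c, A, b, np.zeros(2), np.zeros(1))
print(x_star, lam_star)  # close to [0.5, 1.5] and [1.0]
```

Because there is no penalty term, no tuning parameter appears in the dynamics; the multiplier state converges together with the primal state, which mirrors the parameter-free property the abstract attributes to the Chapter 3 network.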
Keywords/Search Tags:Nonlinear programming, mixed nonlinear complementarity problem, neural network, stability, convergence, nonlinear evolution equation, finite difference method