
Neural Networks For Optimization And Their Stability Analysis

Posted on: 2007-08-16
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X S Shen
GTID: 1100360212477643
Subject: Basic mathematics
Abstract/Summary:
Recurrent neural networks have powerful computational capabilities and are one of the most important models in neurocomputing. By means of the projection method, the addition of a small perturbing function to the objective function, the penalty method, and the gradient method, this dissertation proposes several neural networks for solving saddle point problems and minimum-radius ball-covering problems, and investigates the stability of the proposed networks by the LaSalle invariance principle and the Lyapunov method, so as to design neural networks that avoid getting stuck in local minima. The dissertation is divided into five chapters.

Chapter 1 presents a survey of optimization computing by neural networks, including stability analysis for recurrent neural networks and neural networks for solving saddle point problems and minimum-radius ball-covering problems.

Chapter 2 converts the solutions of saddle point problems in Hilbert spaces into the equilibrium points of a dynamical system by means of the projection method, and gives conditions for global asymptotic stability by extending the LaSalle invariance principle to Hilbert spaces. The results of this chapter are the theoretical foundation for the construction of the neural networks in the next two chapters.

Chapter 3 first proposes a neural network for constrained saddle point problems by means of the projection method, then shows that the proposed network is globally asymptotically stable under some mild conditions. The proposed network can also be applied to minimax problems involving discrete and continuous variables. It contains the networks in [1, 2] as special cases, and the obtained stability results extend and weaken the conditions in [2–5]. Simulation results demonstrate the effectiveness of the proposed network.

Chapter 4 presents a neural network for saddle point problems obtained by adding a small perturbing function to the objective function, and shows that the proposed network is globally exponentially stable and that the solution of the problem is approximated globally and exponentially, without any additional convexity assumptions. Thus it can exponentially solve saddle point problems, including problems which the existing...
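The projection-based dynamics described above can be sketched with a minimal numerical example. The code below is an illustration of a generic projection-type neural network for a saddle point problem, not the dissertation's exact model: the saddle function f(x, y) = x² + xy − y², the box constraint [−1, 1], and all step-size parameters are assumptions chosen so that the trajectory visibly converges to the saddle point.

```python
def proj(v, lo=-1.0, hi=1.0):
    """Projection onto the interval [lo, hi] (the constraint set)."""
    return max(lo, min(hi, v))

def step(x, y, alpha=0.5, h=0.05):
    """One Euler step of the projection dynamics
         dx/dt = P(x - alpha * df/dx) - x,
         dy/dt = P(y + alpha * df/dy) - y,
       for the illustrative saddle function f(x, y) = x^2 + x*y - y^2,
       which is strongly convex in x and strongly concave in y."""
    gx = 2 * x + y      # partial derivative of f with respect to x
    gy = x - 2 * y      # partial derivative of f with respect to y
    x_new = x + h * (proj(x - alpha * gx) - x)
    y_new = y + h * (proj(y + alpha * gy) - y)
    return x_new, y_new

def simulate(x0, y0, steps=2000):
    """Integrate the dynamics from (x0, y0); returns the final state."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = step(x, y)
    return x, y

x, y = simulate(0.9, -0.8)  # trajectory converges toward the saddle point (0, 0)
```

An equilibrium of these dynamics is exactly a point where the projected gradient-descent direction in x and projected gradient-ascent direction in y both vanish, i.e. a constrained saddle point, which mirrors the conversion of saddle point solutions into equilibrium points described in Chapter 2.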
Keywords/Search Tags: neural network, global stability, saddle point problem, ball-coverings