
Research on Non-Smooth Optimization Problems by Lagrange Neural Network

Posted on: 2015-05-29
Degree: Master
Type: Thesis
Country: China
Candidate: Y Yu
Full Text: PDF
GTID: 2298330431483943
Subject: Computer software and theory

Abstract/Summary:
Optimization problems arise throughout science and engineering, in fields such as signal processing, optimal control, statistics, and pattern recognition. Neural networks offer an effective direction for solving them: by exploiting highly parallel computation and a relatively simple architecture, a neural network can solve fairly complex optimization problems, even in real time. Over the past three decades, researchers have proposed many neural network models for optimization, and smooth optimization problems are now well handled. In practical applications, however, non-smooth optimization problems are the more common and more general case. The main work of this thesis is to study the solution of non-smooth optimization problems with Lagrange neural network models. It is organized as follows.

First, we consider non-smooth optimization problems whose objective function is locally Lipschitz and whose feasible set is defined by a family of smooth convex equality constraints. A smoothing approximation technique converts the non-smooth objective into a smooth one, and the Lagrange neural network is then modeled by a class of differential equations that can be implemented easily. The methodology is based on the Lagrange multiplier theory of optimization and seeks solutions satisfying the necessary conditions of optimality. It is proved that the equilibrium point set of the network is a subset of the critical point set of the primal problem, and that when the objective function of the primal problem is convex, its minimum set coincides with the equilibrium point set of the network. A simulation experiment, programmed in Matlab, illustrates these theoretical findings.

Second, the traditional penalty function neural network approach to optimization suffers from great computational difficulty, which augmented Lagrangian neural networks can effectively avoid. Using the theory of Lagrange multipliers and penalty methods, a differential inclusion is established that defines an augmented Lagrangian neural network. Here the objective function of the non-smooth optimization problem is locally Lipschitz, the feasible region consists of a group of equality constraints, and the constraint functions are merged into a modified objective function that handles the constraints. Compared with existing penalty function neural networks for non-smooth optimization, the neurons of the Lagrange network drive the dynamic trajectory into the feasible region rapidly. When the objective function is convex, the network reaches an equilibrium state because its energy function is non-increasing along trajectories, and the dynamic trajectory eventually converges to the critical point set of the primal problem. A second simulation experiment illustrates these theoretical findings.
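To make the first construction concrete, here is a minimal sketch of its ingredients; the smoothing function below is one standard choice and is an assumption, not necessarily the one used in the thesis. A non-smooth term such as |t| can be smoothed by

    \varphi_\mu(t) = \sqrt{t^2 + \mu^2}, \qquad \mu > 0,

which is smooth for every \mu > 0 and satisfies 0 \le \varphi_\mu(t) - |t| \le \mu, so it converges uniformly to |t| as \mu \to 0. Writing f_\mu for the smoothed objective and h(x) = 0 for the equality constraints, the Lagrangian is L(x, \lambda) = f_\mu(x) + \lambda^T h(x), and a Lagrange network of the kind described runs the gradient dynamics

    \dot{x} = -\nabla_x L(x, \lambda), \qquad \dot{\lambda} = \nabla_\lambda L(x, \lambda) = h(x),

whose equilibria are exactly the points satisfying the first-order necessary conditions \nabla_x L(x, \lambda) = 0 and h(x) = 0.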
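The following sketch (in Python rather than the Matlab used in the thesis) simulates these dynamics on a hypothetical toy problem: minimize |x1| + x2^2 subject to x2 = 1, whose minimizer (0, 1) sits exactly at the kink of |x1|.

import numpy as np

# Hypothetical toy problem (an illustration, not the thesis's test problem):
#   minimize  f(x) = |x1| + x2^2   subject to  h(x) = x2 - 1 = 0.
# The exact minimizer x* = (0, 1) sits at the kink of |x1|.

mu = 0.05   # smoothing parameter: |t| is replaced by sqrt(t^2 + mu^2)

def grad_f_smooth(x):
    # Gradient of the smoothed objective f_mu(x) = sqrt(x1^2 + mu^2) + x2^2.
    return np.array([x[0] / np.sqrt(x[0] ** 2 + mu ** 2), 2.0 * x[1]])

def h(x):
    # Equality constraint h(x) = x2 - 1.
    return x[1] - 1.0

def grad_h(x):
    return np.array([0.0, 1.0])

# Lagrange network dynamics, integrated by explicit Euler:
#   dx/dt   = -grad_x L(x, lam)   = -(grad f_mu(x) + lam * grad h(x))
#   dlam/dt = +grad_lam L(x, lam) = h(x)
x = np.array([2.0, -1.0])   # initial neuron states
lam = 0.0                   # initial Lagrange multiplier
dt = 1e-2                   # step size; must stay small relative to mu,
                            # since the smoothed gradient is stiff near the kink

for _ in range(20000):
    dx = -(grad_f_smooth(x) + lam * grad_h(x))
    dlam = h(x)
    x = x + dt * dx
    lam = lam + dt * dlam

print("x   ~", x)    # expected: close to (0, 1)
print("lam ~", lam)  # expected: close to -2, since 2*x2 + lam = 0 at equilibrium

Shrinking mu tightens the approximation of |x1| but makes the smoothed gradient stiffer near the kink, so the integration step must shrink with it; this trade-off is inherent to the smoothing approach.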
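For the second construction, a standard augmented Lagrangian (stated here as background; the exact form used in the thesis may differ) merges the constraints into the modified objective

    L_c(x, \lambda) = f(x) + \lambda^T h(x) + \frac{c}{2} \|h(x)\|^2, \qquad c > 0.

Because f is only locally Lipschitz, the gradient \nabla_x is replaced by the Clarke subdifferential \partial_x, and the network dynamics become the differential inclusion

    \dot{x} \in -\partial_x L_c(x, \lambda), \qquad \dot{\lambda} = h(x).

The quadratic penalty term is what pulls trajectories into the feasible region quickly, while the multiplier update removes the need to drive the penalty parameter c to infinity, which is the main source of computational difficulty in pure penalty function networks.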
Keywords/Search Tags: Lagrange neural network, Non-smooth optimization problems, Energy function, Locally Lipschitz function, Differential inclusion