
Novel Lagrange Neural Network For Nonsmooth Optimization Problems

Posted on: 2017-12-18
Degree: Master
Type: Thesis
Country: China
Candidate: C Y Li
GTID: 2348330488459204
Subject: Computer software and theory

Abstract/Summary:
Optimization problems are among the most important questions in science and engineering, and include both combinatorial optimization and function optimization. Researchers have devoted substantial effort to solving them. In many applications, however, real-time solutions are required, and traditional methods cannot provide them because of the complexity of the problems. Artificial neural networks are an effective approach to such problems: thanks to their inherently massively parallel mechanism and fast hardware implementation, they can significantly outperform traditional optimization algorithms.

In recent decades, researchers have proposed a number of neural network models for solving optimization problems. Most of these models, however, rely on a fixed penalty coefficient: a specific value of the coefficient must be determined before the network runs in order to guarantee convergence to the optimal solution set, and such values are sometimes difficult to calculate. In this thesis, a new neural network model based on Lagrange theory is proposed for nonsmooth optimization problems. The penalty coefficient is allowed to vary, while the network is still guaranteed to converge to an optimal solution of the optimization problem. The specific contributions are as follows.

First, linearly constrained optimization problems are analyzed and a new Lagrange neural network model is proposed. After a finite running time, the state vector x stays in the feasible region and converges to the set of critical points. Simulation experiments illustrate the correctness of this conclusion.

Then, nonlinearly constrained optimization problems are analyzed. Using the Clarke generalized gradient of the functions involved together with the Lagrange method, a gradient system of differential inclusions is established.
We first prove the existence of solutions to the network based on properties of the functions, and use an energy function to show that the network has an equilibrium point. Furthermore, if the problem is convex, the equilibrium point coincides exactly with the solution of the programming problem.
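To illustrate the general idea, the following is a minimal sketch, not the thesis's exact model: a Lagrange neural network for a small nonsmooth problem, minimize |x1| + |x2| subject to x1 + x2 = 2, simulated by Euler integration. A Clarke subgradient of the absolute value is selected via `np.sign`, and an augmented penalty term with coefficient `rho` (an assumed damping device, common in Lagrange-type networks) stabilizes the saddle-point dynamics. The problem, step size, and `rho` are all illustrative choices.

```python
import numpy as np

# Nonsmooth problem: minimize |x1| + |x2| subject to A x = b,
# with A = [1 1], b = 2. The minimum value is 2, attained at x = (1, 1)
# among other points on the constraint line.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
rho, dt = 1.0, 0.01   # damping coefficient and Euler step (illustrative)

x = np.zeros(2)       # state neurons
lam = np.zeros(1)     # Lagrange-multiplier neuron

for _ in range(8000):
    g = np.sign(x)                    # a Clarke subgradient of |x1| + |x2|
    residual = A @ x - b              # constraint violation A x - b
    # Descent in x on the (augmented) Lagrangian, ascent in the multiplier:
    dx = -(g + A.T @ lam + rho * (A.T @ residual))
    dlam = residual
    x += dt * dx
    lam += dt * dlam

print(x, lam)  # x approaches the constraint line; lam approaches -1
```

The multiplier dynamics drive the state into the feasible region while the subgradient term pushes toward the minimum of the objective, mirroring the behavior described above: after a finite running time the state stays feasible and converges toward the solution set.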
Keywords/Search Tags:nonsmooth optimization, neural network, locally Lipschitz function, Lagrange function