
The Research For Several Neurodynamic Optimization Problems

Posted on: 2015-05-17
Degree: Master
Type: Thesis
Country: China
Candidate: G X Wu
Full Text: PDF
GTID: 2298330422491678
Subject: Computational Mathematics
Abstract/Summary:
It is well known that optimization problems play an important role in many engineering fields, and many practical problems can be formulated as optimization problems. As the shortcomings of traditional optimization methods have become apparent, researchers have sought better ways to solve such problems. Because neural networks offer massively parallel, distributed computation and fast convergence, more and more scholars have recognized the importance of neurodynamic optimization. Research in this area is flourishing, and new results of many kinds appear constantly. Based on neurodynamic optimization and nonsmooth analysis, this thesis studies two classes of convex optimization problems.

First, we study a class of nonsmooth optimization problems with equality and inequality constraints. Based on Tikhonov regularization, a one-layer neural network is proposed for solving such problems, and its asymptotic stability is proved by Lyapunov's stability theorem. Compared with existing neural networks for this problem, the proposed network has lower model complexity and introduces no penalty parameters. In particular, its validity does not rely on assumptions that the objective function is coercive or that the feasible region is bounded.

Second, we study a class of quadratic programming problems with inequality constraints. Based on the Karush-Kuhn-Tucker (KKT) conditions, a gradient-based neural network is proposed, and it is proved that the equilibrium set of the proposed network coincides with the optimal solution set of the quadratic program. Finally, the asymptotic stability of the proposed network is proved by applying LaSalle's invariance principle and the Łojasiewicz inequality. The advantage of this network is that its convergence does not depend on additional assumptions.
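To illustrate the general idea of solving a constrained quadratic program with a neurodynamic (continuous-time) model, the sketch below integrates a standard projection-type gradient network for a box-constrained QP by forward Euler. This is a generic textbook-style network, not the exact model proposed in the thesis; the data Q, c and the box bounds are illustrative assumptions.

```python
import numpy as np

# Illustrative QP: minimize 0.5*x'Qx + c'x  subject to  l <= x <= u.
# Q, c, l, u are made-up example data, not taken from the thesis.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
c = np.array([-1.0, -2.0])
l, u = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def project(x):
    """Projection onto the feasible box [l, u]."""
    return np.clip(x, l, u)

def simulate(x0, alpha=0.1, dt=0.05, steps=2000):
    """Forward-Euler integration of the projection network
    dx/dt = project(x - alpha*(Qx + c)) - x; an equilibrium of this
    dynamics satisfies the KKT conditions of the QP."""
    x = x0.copy()
    for _ in range(steps):
        grad = Q @ x + c               # gradient of the objective
        x = x + dt * (project(x - alpha * grad) - x)
    return x

x_star = simulate(np.array([0.5, 0.5]))
print(x_star)  # state converges to a KKT point of the QP
```

For this particular data the unconstrained minimizer Q⁻¹(-c) = (1/11, 7/11) already lies inside the box, so the network's equilibrium is that point; with active constraints the projection keeps the trajectory feasible in the limit.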
Keywords/Search Tags: Nonsmooth convex optimization, Quadratic programming, Tikhonov regularization method, Neural networks