
Research On Two Kinds Of Neural Networks To Solve Nonsmooth Pseudoconvex Optimization Problems

Posted on: 2022-07-27
Degree: Master
Type: Thesis
Country: China
Candidate: H X Lu
Full Text: PDF
GTID: 2518306536954659
Subject: Software engineering

Abstract/Summary:
Artificial neural networks offer advantages such as large-scale parallel processing and fast convergence to an optimal solution. Nonsmooth nonconvex optimization problems arise widely in scientific and engineering applications, and pseudoconvex optimization problems form a special class among them, so solving pseudoconvex optimization problems with artificial neural networks has considerable research value. This thesis proposes two different neural networks for solving nonsmooth pseudoconvex optimization problems with inequality and equality constraints. The main contributions are as follows:

First, based on the theory of differential inclusions and the idea of penalty functions, a single-layer recurrent neural network model is proposed. Compared with existing neural network models, this model places no special requirements on the choice of initial point, has a simple single-layer structure, and does not require an exact penalty factor to be computed in advance. Rigorous theoretical analysis proves that, for any initial point, the state solution of the neural network converges to the feasible region in finite time, remains there thereafter, and finally converges to an optimal solution of the original problem. The correctness of the theory is verified by numerical experiments.

Second, based on the same differential-inclusion theory and penalty-function idea, another novel single-layer recurrent neural network model with a regularization term is proposed. Rigorous theoretical analysis again proves that, for any initial point, the state solution converges to the feasible region in finite time, stays there, and finally converges to an optimal solution of the original problem; the theory is likewise verified by numerical experiments. The advantages of this model are that no complex penalty parameters need to be computed in advance, the structure is simple, and the initial point can be chosen arbitrarily.
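To illustrate the general idea (not the exact models of the thesis), the following is a minimal sketch of a penalty-based neurodynamic system integrated by forward Euler. The problem, penalty weight `SIGMA`, step size, and iteration count are all illustrative assumptions; in particular, the fixed penalty weight here is precisely what the thesis models avoid having to precompute.

```python
import numpy as np

# Toy problem:  min f(x) = |x1-1| + |x2-1|   (nonsmooth, convex, hence
#               pseudoconvex)
#               s.t. g(x) = x1 + x2 - 1 <= 0
# Dynamics (differential inclusion, discretized by forward Euler):
#     dx/dt in -( subgrad f(x) + SIGMA * subgrad max(0, g(x)) )
# SIGMA is a fixed, hand-chosen penalty weight -- an assumption of this
# sketch, not a feature of the thesis models.

def f(x):
    return np.abs(x - 1.0).sum()

def sub_f(x):
    # One element of the subdifferential of f at x.
    return np.sign(x - 1.0)

def g(x):
    return x[0] + x[1] - 1.0

def sub_penalty(x):
    # Subgradient of the penalty term max(0, g(x)).
    return np.array([1.0, 1.0]) if g(x) > 0 else np.zeros(2)

def solve(x0, sigma=5.0, h=1e-3, steps=20000):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * (sub_f(x) + sigma * sub_penalty(x))
    return x

# The initial point may be chosen arbitrarily, even infeasible:
x = solve([3.0, 3.0])
```

The trajectory first enters the feasible region (the penalty term dominates while g(x) > 0), then slides along the constraint boundary toward a minimizer, ending near the optimal value f = 1; the Euler discretization leaves a small chattering band around the boundary whose width scales with the step size.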
Keywords/Search Tags: neural network, nonsmooth pseudoconvex optimization, differential inclusion, penalty function, convergence