
On Solving Non-Lipschitz Optimization Problems With Neural Network

Posted on: 2016-03-13  Degree: Master  Type: Thesis
Country: China  Candidate: M Xie  Full Text: PDF
GTID: 2308330464968534  Subject: Computer software and theory
Abstract/Summary:
Nowadays, optimization problems arise throughout science and engineering, in areas such as optimal control, pattern recognition, and image processing. Traditionally, numerical methods are used to solve linear and nonlinear programming problems. However, because the time complexity of a numerical method depends on the dimension and structure of the optimization problem, such methods cannot produce solutions in real time. Artificial neural networks, with their rapid convergence and parallel computing power, offer an effective approach to obtaining real-time solutions.

In recent decades, neural networks have increasingly been used to solve optimization problems, especially non-smooth convex and non-convex ones. These methods, however, are built on Clarke's generalized gradient; in other words, the objective function must satisfy a Lipschitz condition, and not every objective function arising in applications does. To make neural network methods more universal and versatile, this thesis focuses specifically on non-Lipschitz optimization problems.

The thesis is structured as follows.

First, the thesis presents a novel network, modeled by a differential inclusion together with smoothing approximation techniques, for solving optimization problems in which the objective function is non-Lipschitz and the feasible region is defined by nonlinear inequalities. The uniform boundedness and global existence of the solutions of the smoothed neural network are proved through detailed theoretical analysis.
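To make the smoothing idea concrete, the sketch below illustrates one common construction (the specific smoothing function, the test objective, and all parameter values are illustrative assumptions, not the thesis's exact model): a non-Lipschitz term such as |x|^p with 0 < p < 1 is replaced by the smooth surrogate (x² + μ²)^(p/2), which tends to |x|^p as the smoothing parameter μ → 0, and the smoothed gradient flow is integrated by explicit Euler while μ is driven toward zero.

```python
# Illustrative non-Lipschitz problem: minimize f(x) = (x - 1)^2 + lam * |x|^p
# with p = 0.5. The |x|^0.5 term is non-Lipschitz at x = 0, so Clarke's
# generalized gradient does not apply; a smoothing surrogate is used instead.
lam, p = 0.5, 0.5

def f(x):
    """True (non-smooth, non-Lipschitz) objective."""
    return (x - 1.0) ** 2 + lam * abs(x) ** p

def grad_smooth(x, mu):
    """Gradient of the smoothed objective f_mu(x) = (x-1)^2 + lam*(x^2+mu^2)^(p/2)."""
    return 2.0 * (x - 1.0) + lam * p * x * (x * x + mu * mu) ** (p / 2.0 - 1.0)

# Explicit Euler discretization of the gradient flow dx/dt = -grad f_mu(x),
# with the smoothing parameter mu shrunk toward zero along the trajectory.
x, mu, h = 2.0, 1.0, 0.01
f_start = f(x)
for _ in range(3000):
    x -= h * grad_smooth(x, mu)
    mu *= 0.995            # drive the smoothing parameter toward zero
f_end = f(x)
print(f"x = {x:.4f}, f: {f_start:.4f} -> {f_end:.4f}")
```

The key property used in the thesis's analysis is that the smoothed problems are well-behaved for each fixed μ > 0, while their solutions accumulate at stationary points of the original non-Lipschitz problem as μ → 0.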
Moreover, any accumulation point of the solutions of the smoothed neural network is a stationary point of the optimization problem.

Second, the thesis presents a novel network, again modeled by a differential inclusion and smoothing approximation techniques, for non-Lipschitz optimization problems whose feasible region is defined by linear inequalities; the number of neurons in the proposed network equals the number of decision variables of the optimization problem. It is proved that if a single design parameter in the model is larger than a derived lower bound, the neural network has a unique global solution, and the trajectory of the network reaches the feasible region in finite time and stays there thereafter. It is further proved that any accumulation point of the solutions of the smoothed neural network is a stationary point of the optimization problem.
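For the linearly constrained case, a classic projection-type dynamics can serve as a sketch of how such a network behaves (this is an assumed illustrative construction, not necessarily the thesis's exact model; the feasible region here is the box x ≥ 0, a simple linear-inequality set whose projection is a componentwise clip):

```python
# Sketch: projection-type network dx/dt = -x + P(x - alpha * grad f_mu(x))
# on the feasible region {x : x >= 0}, discretized by explicit Euler.
# One state component (neuron) per decision variable, as in the thesis.
# Objective, constraints, and parameters are illustrative assumptions.
lam, p = 0.5, 0.5
c = [2.0, -1.0]          # centers of the quadratic terms (illustrative)

def f(x):
    """True non-Lipschitz objective: sum_i (x_i - c_i)^2 + lam * |x_i|^p."""
    return sum((xi - ci) ** 2 + lam * abs(xi) ** p for xi, ci in zip(x, c))

def grad_smooth(x, mu):
    """Gradient of the smoothed objective, using (x^2 + mu^2)^(p/2) for |x|^p."""
    return [2.0 * (xi - ci) + lam * p * xi * (xi * xi + mu * mu) ** (p / 2.0 - 1.0)
            for xi, ci in zip(x, c)]

def project(x):
    """Projection onto the feasible region {x : x >= 0}."""
    return [max(xi, 0.0) for xi in x]

x, mu = [3.0, 1.0], 1.0
h, alpha = 0.05, 0.1
f_start = f(x)
for _ in range(4000):
    target = project([xi - alpha * gi for xi, gi in zip(x, grad_smooth(x, mu))])
    x = [(1.0 - h) * xi + h * ti for xi, ti in zip(x, target)]
    mu *= 0.995
f_end = f(x)
feasible = min(x) >= -1e-12
print(f"x = {x}, feasible = {feasible}, f: {f_start:.4f} -> {f_end:.4f}")
```

In this sketch the iterate is a convex combination of its current value and a point in the feasible region, so once the trajectory is feasible it remains feasible thereafter, mirroring the finite-time feasibility property proved in the thesis.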
Keywords/Search Tags: Artificial neural networks, non-Lipschitz optimization problems, smoothing approximation techniques, differential inclusion, accumulation point, stationary point