Neural Network Optimization Algorithm And Application Based On KKT Condition

Posted on: 2017-07-18
Degree: Master
Type: Thesis
Country: China
Candidate: H J Che
Full Text: PDF
GTID: 2348330503483850
Subject: Signal and Information Processing

Abstract/Summary:
Nonlinear programming is an important branch of operations research and is widely applied in information processing, intelligent control, portfolio optimization, and other areas. In recent years, with the rapid growth of the Internet and big data, traditional optimization algorithms face enormous challenges, particularly in real-time data analysis and in the efficient solution of large-scale optimization problems; classical optimization methods can no longer satisfy these requirements. It is therefore urgent to develop novel, efficient methods suited to engineering applications. Owing to its structure, a neural network can process information in parallel and realize nonlinear mappings easily, which has drawn more and more researchers to the field. Although the theory of neural networks has advanced considerably, the following problems remain: 1) there is no general method for solving continuous non-convex problems efficiently; 2) complex-valued neural network optimization algorithms are time-consuming and cannot meet real-time requirements; 3) neural networks require special designs when applied to specific engineering problems; 4) their potential in distributed optimization, multi-objective optimization, vector optimization, and goal programming remains to be explored.

In this thesis, two neural networks are designed based on convex analysis, Lyapunov stability theory, and natural computation, and are applied to continuous non-convex problems with equality constraints and to convex quadratic programming. The main results and contributions of this thesis are as follows:

1) To solve continuous non-convex problems with equality constraints, we use variational inequalities and projection operators to construct a projection neural network. In the second chapter, we propose a swarm neural network framework based on the shuffled frog leaping algorithm, study the convergence of the new algorithm via the properties of the projection function, and apply it to several classical benchmark functions; the results demonstrate the efficiency of the new algorithm.

2) Based on the gradient optimization method, this thesis proposes a gradient recurrent neural network (RNN) and applies it to beamforming in communication systems. In the third chapter, we use LaSalle's invariance principle and a Lyapunov energy function to establish the stability and convergence of the RNN. Compared with several existing algorithms, the proposed neural network obtains better results on certain problems.
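The abstract does not give the thesis' exact network dynamics, so the following is only a minimal sketch of one standard projection neural network construction for equality-constrained problems: the state obeys dx/dt = -x + P(x - alpha * grad_f(x)), where P is the Euclidean projection onto the feasible set, and equilibria of this flow satisfy the KKT conditions. The objective, constraint data, step sizes, and simulation horizon below are illustrative assumptions, not values from the thesis.

import numpy as np

# Projection onto the affine feasible set {x : A x = b},
# assuming A has full row rank.
def affine_projection(A, b):
    AAt_inv = np.linalg.inv(A @ A.T)
    return lambda x: x - A.T @ (AAt_inv @ (A @ x - b))

# Forward-Euler simulation of dx/dt = -x + P(x - alpha * grad_f(x)).
# Equilibria of this flow satisfy the KKT conditions of
#   min f(x)  s.t.  A x = b.
def projection_network(grad_f, A, b, x0, alpha=0.1, dt=0.01, steps=5000):
    P = affine_projection(A, b)
    x = P(np.asarray(x0, dtype=float))  # start from a feasible point
    for _ in range(steps):
        x = x + dt * (-x + P(x - alpha * grad_f(x)))
    return x

# Toy example (illustrative, not from the thesis):
# minimize ||x - c||^2 subject to sum(x) = 1.
c = np.array([3.0, -1.0, 2.0])
grad_f = lambda x: 2.0 * (x - c)
A = np.ones((1, 3))
b = np.array([1.0])
x_star = projection_network(grad_f, A, b, x0=np.zeros(3))
print(x_star)      # approaches [2, -2, 1]
print(A @ x_star)  # constraint sum(x) = 1 is maintained

For a non-convex objective such a flow may settle at a local KKT point, which is consistent with why the second chapter combines the projection network with the shuffled frog leaping algorithm: the swarm framework searches among multiple network trajectories rather than relying on a single initialization.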
Keywords/Search Tags: Neural network, Projection operator, Non-convex optimization, Lyapunov energy function