
Design methodology and stability analysis of recurrent neural networks for constrained optimization

Posted on: 2001-10-15    Degree: Ph.D.    Type: Thesis
University: Chinese University of Hong Kong (People's Republic of China)    Candidate: Xia, You-sheng    Full Text: PDF
GTID: 2468390014953026    Subject: Computer Science
Abstract/Summary:
Constrained optimization problems arise frequently in scientific research and engineering applications. Solving optimization problems with recurrent neural networks has been investigated extensively because of their advantages of massively parallel operation and rapid convergence. However, apart from a few neural networks for solving linear and quadratic programming problems, most existing recurrent neural networks suffer from limitations and disadvantages in computation or implementation.

In this thesis, we first describe the principles for designing recurrent neural networks, and then introduce three methods, each with a deterministic procedure, for designing neural networks for constrained optimization. Two of them are used to design continuous-time neural networks, and the third to design discrete-time neural networks. The advantage of the proposed methodology is fourfold. First, it guarantees that any designed network is stable in the sense of Lyapunov and globally convergent, so the laborious stability analysis of each individual resulting network can be dispensed with. Second, the derived neural networks can cope with problems whose solution sets are unbounded. Third, the gradient and non-gradient methods employed in existing optimization neural networks are special cases of the proposed methodology, which is therefore more general. Fourth, it can provide additional alternative neural networks that are simple to implement.

On the theoretical side, we prove that the proposed neural networks are globally convergent to exact solutions when the underlying mapping is monotone and Lipschitz continuous, when it is monotone and symmetric, or when it is symmetric and the feasible set is bounded. In particular, the proposed neural networks are globally asymptotically stable when the solution is unique. Furthermore, we prove their global exponential stability when the mapping is strongly monotone. In addition, we study the global stability of the Kennedy and Chua neural network, thereby improving the existing stability results.

A new recurrent neural network, called the dual neural network, is also presented. The number of neurons in the proposed dual network equals the dimensionality of the workspace. Compared with existing neural networks for computing inverse kinematics, the dual neural network has a smaller size and desirable exponential stability. (Abstract shortened by UMI.)
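For illustration only, and not the thesis's exact model: a well-known continuous-time design of the kind the abstract describes is a projection-type recurrent network whose state obeys dx/dt = P_Omega(x - alpha*F(x)) - x, where P_Omega projects onto the feasible set; for a gradient mapping F(x) = Qx + c with positive semidefinite Q, F is monotone and Lipschitz, the setting in which the abstract's global-convergence results apply. The sketch below simulates such a network for a small box-constrained quadratic program; the data Q, c, the bounds, and the forward-Euler integration are all assumptions made up for the example.

import numpy as np

# Minimal sketch (assumed model, not the thesis's method): a projection-type
# recurrent neural network for the box-constrained quadratic program
#     minimize 0.5*x'Qx + c'x   subject to   l <= x <= u,
# with dynamics  dx/dt = P_Omega(x - alpha*(Qx + c)) - x,
# where P_Omega is the projection onto the box [l, u] (np.clip).

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # example data: positive definite Q
c = np.array([-1.0, -2.0])
l = np.zeros(2)                      # box constraints 0 <= x <= 1
u = np.ones(2)

alpha, dt = 0.2, 0.01                # gain and Euler time step
x = np.array([0.9, 0.1])             # arbitrary initial state

for _ in range(5000):                # forward-Euler integration of the ODE
    x_proj = np.clip(x - alpha * (Q @ x + c), l, u)   # P_Omega(x - alpha*F(x))
    x = x + dt * (x_proj - x)

print("approximate minimizer:", x)

A fixed point of these dynamics satisfies x = P_Omega(x - alpha*F(x)), which is precisely the variational-inequality characterization of a KKT point of the program, so stable equilibria coincide with exact solutions.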
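Likewise, a minimal sketch of the dual-network idea for inverse kinematics, under assumptions not stated in the abstract (a planar two-link arm with hypothetical link lengths, and the minimum-norm formulation: minimize 0.5*||qdot||^2 subject to J(q) qdot = v). The KKT conditions give qdot = J'u with J J'u = v, so the network only needs a dual state u of workspace dimension, matching the abstract's remark on network size, and the linear dynamics eps*du/dt = v - J J'u are globally exponentially stable whenever J J' is positive definite.

import numpy as np

# Minimal sketch (assumed formulation, not the thesis's exact model):
# a dual recurrent network for minimum-norm inverse kinematics of a
# planar two-link arm.  Dual dynamics:  eps * du/dt = v - J J' u,
# output mapping:  qdot = J' u.

L1, L2 = 1.0, 0.8                    # hypothetical link lengths

def jacobian(q):
    """2x2 Jacobian of the planar two-link forward kinematics."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

q = np.array([0.5, 0.7])             # joint angles away from a singularity
v = np.array([0.1, -0.2])            # desired end-effector velocity
u = np.zeros(2)                      # dual state: one neuron per workspace dim
eps, dt = 0.01, 1e-4

J = jacobian(q)
for _ in range(20000):               # forward-Euler integration of the ODE
    u = u + (dt / eps) * (v - J @ (J.T @ u))

qdot = J.T @ u                       # recover the minimum-norm joint velocity
print("qdot:", qdot, "residual:", J @ qdot - v)

Because the dynamics are linear with system matrix -J J'/eps, which is Hurwitz for full-row-rank J, convergence is exponential, consistent with the stability property the abstract attributes to the dual neural network.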
Keywords/Search Tags: Neural, Stability, Optimization, Method