
Several Problems In Multi-stable Recurrent Neural Networks

Posted on: 2011-07-21  Degree: Doctor  Type: Dissertation
Country: China  Candidate: W Zhou  Full Text: PDF
GTID: 1118360308465892  Subject: Computer software and theory
Abstract/Summary:
Since their rebirth in the early 1980s, neural networks have attracted the attention of scientists and technologists from a number of disciplines. Neural networks have proven useful in solving real-world problems in areas such as economics, the military, engineering, medicine, and finance.

Multistability is an important research topic in neural networks. A multi-stable network can have multiple stable equilibria, while a mono-stable one always has only one stable equilibrium. A multi-stable network therefore outperforms a mono-stable one on tasks such as complex optimization problems that admit many optimal solutions.

This research focuses on discrete-time multi-stable recurrent neural networks (RNNs). Compared with continuous-time neural networks, discrete-time neural networks have advantages for direct computer simulation and for implementation in digital hardware.

According to the type of data processed, this dissertation is divided into two main parts:

1. Research on complex-valued neural networks

Complex-number calculus has been found useful in areas such as electrical engineering, informatics, control engineering, bioengineering, and other related fields. It is therefore not surprising that complex-valued neural networks, which deal with complex-valued data, complex-valued weights, and complex-valued neuron activation functions, have also been widely studied in recent years. The contributions are:

(1) Convergence analysis for a class of discrete-time RNNs with multi-valued neurons (MVN) in synchronous update mode (the MVN activation is sketched after this abstract). Two theorems are presented, and simulation results are used to illustrate the theory.

(2) Convergence analysis for a class of discrete-time RNNs with complex-valued linear threshold (LT) neurons. This part addresses the boundedness, global attractivity, and complete stability of such networks, and derives conditions for these properties. The motivation for this study is twofold: first, to extend the convergence study of real-valued LT RNNs to the complex-valued case; second, to develop analysis methods for complex-valued RNNs from a dynamical-systems viewpoint that may benefit the analysis of other properties of complex-valued RNNs in the future.

2. Research on real-valued neural networks and their application

The contributions are:

(1) A Competitive Layer Model (CLM) for a class of discrete-time RNNs with LT neurons (a minimal LT network update is also sketched after this abstract). This part first addresses the boundedness, global attractivity, and complete stability of the networks. Two theorems are then presented for the networks to exhibit the CLM property. The analysis of the network dynamics shows that the networks perform column Winner-Take-All (WTA) behavior and grouping selection among different layers. Furthermore, a novel Synchronous CLM (SCLM) iteration method is proposed, which has performance and storage allocation similar to the previous Asynchronous CLM (ACLM) iteration method but converges faster. Examples and simulation results are used to illustrate the developed theory, the comparison between the two CLM iteration methods, and an application to image segmentation.

(2) A class of discrete-time RNNs with LT neurons for solving the traveling salesman problem (TSP). This part first addresses boundedness and complete stability, then gives a theorem ensuring that all of the networks' iteration solutions are valid TSP solutions. An algorithm based on such networks with a local escape mechanism is also proposed. Simulation results are used to illustrate the developed method. Compared with TSP solutions obtained by Lotka-Volterra RNNs, the new method performs better.
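The multi-valued neuron referred to in part 1 can be illustrated with a short sketch. This assumes the standard Aizenberg-style MVN activation, which maps a complex weighted sum onto one of k k-th roots of unity according to the sector containing its argument, together with a synchronous update; the dissertation may use a variant, and the weights, state size, and k below are illustrative placeholders.

    import numpy as np

    # Sketch of an Aizenberg-style multi-valued neuron (MVN): the complex
    # weighted input is mapped to the k-th root of unity whose sector
    # contains the input's argument. k, W, and the initial state are
    # illustrative placeholders, not parameters from the dissertation.

    def mvn_activation(z, k):
        """Map complex z to exp(i*2*pi*j/k), where sector j contains arg(z)."""
        angle = np.angle(z) % (2 * np.pi)        # argument in [0, 2*pi)
        j = np.floor(angle * k / (2 * np.pi))    # sector index 0 .. k-1
        return np.exp(1j * 2 * np.pi * j / k)

    def mvn_rnn_step(state, W, k):
        """Synchronous update of all MVN neurons: s <- f(W s)."""
        return mvn_activation(W @ state, k)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n, k = 4, 8
        W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        s = mvn_activation(rng.standard_normal(n) + 1j * rng.standard_normal(n), k)
        for _ in range(10):
            s = mvn_rnn_step(s, W, k)
        print(s)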
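The linear-threshold (LT) neuron used in part 2 is commonly defined by the activation sigma(u) = max(0, u), which is unbounded, so boundedness of the network state has to be established through conditions on the weights. Below is a minimal sketch of a discrete-time LT RNN with the synchronous update x(t+1) = max(0, W x(t) + h); W, h, and the iteration count are illustrative and are not the networks analyzed in the dissertation.

    import numpy as np

    # Minimal sketch of a discrete-time recurrent network with
    # linear-threshold (LT) neurons, sigma(u) = max(0, u).
    # W and h are illustrative placeholders.

    def lt_step(x, W, h):
        """One synchronous update x(t+1) = max(0, W x(t) + h)."""
        return np.maximum(0.0, W @ x + h)

    def iterate(x0, W, h, steps=100):
        x = np.asarray(x0, dtype=float)
        trajectory = [x]
        for _ in range(steps):
            x = lt_step(x, W, h)
            trajectory.append(x)
        return np.array(trajectory)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 5
        W = 0.4 * rng.standard_normal((n, n))  # weak coupling keeps this sketch bounded
        h = rng.standard_normal(n)
        traj = iterate(rng.random(n), W, h)
        print("final state:", traj[-1])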
Keywords/Search Tags: recurrent neural networks, multi-stability, complex-valued neural networks, convergence analysis, dynamical analysis