
Continuous Attractors Of Recurrent Neural Networks

Posted on: 2011-12-31
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H X Zhang
Full Text: PDF
GTID: 1118360308967200
Subject: Applied Mathematics
Abstract/Summary:
The question of how the brain works and what intelligence is has attracted many scientists in computer science, biology, neuroscience, medicine, physics, and other fields. Neuroscience research indicates that the brain is made of neurons. The neurons are connected into a network, and the network's knowledge and memories are distributed throughout its connectivity. Through this network, the brain processes streams of incoming and outgoing information. Neural networks are a promising approach to answering these questions; they represent a great step forward because their architecture is based on real nervous systems. A great deal of research has been carried out since neural networks were introduced, and many results have been published in top-ranking international journals such as Science and Nature.

Neuroanatomists have long known that the real brain is saturated with feedback connections, so research on recurrent neural networks (RNNs) is particularly important for understanding how the brain works. In recent years RNNs have been well studied, and many theories and applications have emerged. While this is now a well-studied and well-documented area, specific emphasis is given here to a subclass of such models, called continuous attractor neural networks (CANNs), which are beginning to emerge in a wide context of biologically inspired computing. In a number of neurobiological models, continuous attractors have been used to represent continuous quantities such as eye position, head direction, and the orientation of a visual stimulus. The frequent appearance of such models in studies of brain function indicates that they may capture important information-processing mechanisms used in the brain.

Mathematically, in general multistable RNNs the attractors are isolated: every point attractor is surrounded by a basin of attraction, which is what makes such networks useful for associative memory. CANNs, by contrast, are not useful as associative memories, because small perturbations of a network state can trigger different attractor states. Interestingly, there are indications that the brain uses continuous attractor mechanisms for short-term memory stores. Although many results on CANNs have been documented, mathematical theory is still needed to fully understand them.

The main contributions of the dissertation are as follows:

(1) In the second chapter, two classes of place-encoding continuous attractor RNNs based on the background neural network model are proposed. In one model, the inhibition among neurons is realized through a subtractive mechanism; in the other, the RNN is constructed without lateral inhibition. It is shown that if the synaptic connections have a Gaussian shape and the other parameters are appropriately selected, the networks exactly realize continuous attractor dynamics.
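To make the notion of continuous attractor dynamics concrete, the following is a minimal numerical sketch of a one-dimensional CANN with Gaussian-shaped recurrent connections on a ring. It is an illustration only: the divisive-normalization dynamics and all parameter values are common textbook choices assumed here for the example, not the dissertation's exact models.

import numpy as np

# Minimal 1-D CANN sketch (illustrative assumptions, not the
# dissertation's models): N rate neurons on a ring with Gaussian
# recurrent weights and divisive normalization.
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)  # preferred positions
a = 0.5                                            # Gaussian tuning width
k = 1.0                                            # global inhibition strength

# Translation-invariant Gaussian weights over wrapped ring distance
d = (x[:, None] - x[None, :] + np.pi) % (2 * np.pi) - np.pi
J = np.exp(-d**2 / (2 * a**2)) / (a * np.sqrt(2 * np.pi))

def step(u, dt=0.05, tau=1.0):
    """One Euler step of tau*du/dt = -u + J@r, where the rates are
    divisively normalized: r = u_+^2 / (1 + k * sum(u_+^2))."""
    up = np.maximum(u, 0.0)
    r = up**2 / (1.0 + k * np.sum(up**2))
    return u + dt / tau * (-u + J @ r)

# A Gaussian bump relaxes to a stable activity profile; bumps centred
# at every ring position are equally stable, so together they form a
# continuous (line) attractor rather than isolated point attractors.
u = np.exp(-(x - 0.7)**2 / (2 * a**2))
for _ in range(2000):
    u = step(u)
print("bump peak near x =", float(x[np.argmax(u)]))

Because every translate of the bump is a stationary state, a small perturbation can slide the state along the attractor without restoring force, which is exactly why such networks behave differently from point-attractor associative memories.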
(2) In the third chapter, continuous attractors of Lotka-Volterra (LV) RNNs are studied. These networks belong to the class of rate-encoding continuous attractor networks. Conditions are given to ensure that the network has continuous attractors, and a representation of the continuous attractors is obtained under these conditions.

(3) The basic theory of permitted and forbidden sets in generalized brain-state-in-a-box (GBSB) recurrent neural networks is investigated in the fourth chapter. By constructing a proper energy function, some new qualitative properties of the network are presented. Necessary and sufficient conditions are obtained for the existence of permitted and forbidden sets in the GBSB recurrent neural network. These concepts enable a new perspective on GBSB-based associative memories: the memory can be retrieved by an external input, which is more controllable than the initial point, and the stable equilibrium points of the network are constrained to the edges or surfaces of the hypercube. It is also shown that, based on the concepts of permitted and forbidden sets, a GBSB network can implement group selection through strong lateral inhibition.

(4) In the fifth chapter, the dynamic shift mechanisms of modified background neural networks and of LV recurrent neural networks are studied. In modified background neural networks, if the external input has a Gaussian shape whose center varies with time, adding a slight shift to the weights destroys the symmetry of the Gaussian weight function. The activity profile then shifts continuously without changing its shape, and the shift speed can be controlled accurately by a given constant. For LV recurrent neural networks, conditions are also obtained that characterize the dynamic shifting of continuous attractors.
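The shift mechanism of contribution (4) can likewise be sketched by breaking the symmetry of the Gaussian kernel in the toy network above. The offset parameter delta below is an illustrative assumption chosen for the example; the dissertation's modified background networks control the shift speed by a given constant rather than by this ad hoc offset.

import numpy as np

# Continuation of the illustrative CANN sketch: shifting the Gaussian
# weight kernel by a small offset 'delta' breaks its symmetry, so the
# activity bump travels around the ring at a steady speed while keeping
# its shape. All values are illustrative, not the dissertation's models.
N, a, k, delta = 128, 0.5, 1.0, 0.05      # delta: weight asymmetry
x = np.linspace(-np.pi, np.pi, N, endpoint=False)

# Each neuron at x_j now most strongly drives neurons near x_j + delta
d = (x[:, None] - x[None, :] - delta + np.pi) % (2 * np.pi) - np.pi
J = np.exp(-d**2 / (2 * a**2)) / (a * np.sqrt(2 * np.pi))

u = np.exp(-x**2 / (2 * a**2))            # bump initially centred at 0
dt, tau = 0.05, 1.0
for t in range(4000):
    up = np.maximum(u, 0.0)
    r = up**2 / (1.0 + k * np.sum(up**2))
    u += dt / tau * (-u + J @ r)
    if t % 1000 == 0:
        print(f"t={t * dt:6.1f}  bump peak at x = {x[np.argmax(u)]:+.2f}")

Printing the peak position at intervals shows it drifting monotonically in the direction of the weight asymmetry, which is the qualitative behavior the dissertation analyzes rigorously for modified background and LV networks.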
Keywords/Search Tags: Recurrent neural networks, Continuous attractors, Forbidden sets, Permitted sets, Dynamic shift mechanism of continuous attractors