
Computing of Group Neurons of Recurrent Neural Networks

Posted on: 2009-01-02
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Zhang
Full Text: PDF
GTID: 1118360245461935
Subject: Computer software and theory
Abstract/Summary:
One of the most fascinating scientific challenges of our time is to understand the brain, the biological basis of our mental capabilities such as thought, perception, learning, and memory. Artificial neural networks have attracted the interest of many distinguished scientists at top institutes and universities because of their inherent ability to imitate intelligent human behavior and their powerful parallel computational capability. Since the field matured in the 1980s, many inspiring achievements have been obtained, and the theory of neural networks has been applied in many fields, including finance, military affairs, engineering, and medicine. In addition, many results on neural networks have been published in top-ranking international journals such as Science and Nature, while famous corporations such as Intel and IBM have devoted themselves to developing artificial neural network chips. All of this indicates that research on artificial neural networks is important both in theory and in applications.

The dynamical properties of artificial neural networks play a primary role in their applications: usually, only a stable neural network is usable in practice. The stable computational modes of neural networks can be roughly divided into two classes, monostability and multistability. Multistability essentially characterizes the group properties of neurons and describes the nature of neural networks more deeply, so multistable neural networks have more powerful parallel computational capability. Therefore, the dynamical analysis of the computation of groups of neurons has become a major trend in the investigation of artificial neural networks.

The main contributions of the dissertation are as follows:

(1) In the second chapter, multiperiodicity and attractivity are studied for a class of recurrent neural networks (RNNs) with unsaturating piecewise linear transfer functions. Periodic oscillation in RNNs is an interesting behavior, since many biological and cognitive activities require repetition. Using local inhibition, conditions for boundedness and global attractivity are established. Moreover, the multiperiodicity of the network is investigated by means of local invariant sets.

(2) The third chapter investigates the basic theory of permitted and forbidden sets in discrete-time linear threshold recurrent neural networks. These concepts enable a new perspective on memory in neural networks: a memory can be retrieved by an external input, which is more controllable than the initial point. For this class of recurrent neural networks, necessary and sufficient conditions are obtained for complete convergence, for the existence of permitted and forbidden sets, and for conditional multiattractivity.

(3) The concepts of unsaturated and saturated sets are proposed in the fourth chapter to describe in depth some interesting dynamical properties of cellular neural networks. The basic theory of unsaturated and saturated sets of cellular neural networks is studied, and sufficient and necessary conditions for the existence of unsaturated and saturated sets are established. Based on these concepts, Winner-Take-All is further extended to group selection by using cellular neural networks with lateral inhibition; a sketch of this mechanism follows below. In addition, the corresponding relations between groups and unsaturated sets are established.
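To make the group-selection mechanism concrete, here is a minimal sketch, not the dissertation's actual model: a small discrete-time network with the unsaturating linear threshold activation max(0, s) and uniform lateral inhibition. The weights w_self and w_inh, the synchronous update rule, and the initial activities are all illustrative assumptions.

    import numpy as np

    def linear_threshold(s):
        # Unsaturating piecewise linear (linear threshold) activation.
        return np.maximum(0.0, s)

    def wta_step(x, w_self=1.0, w_inh=0.2):
        # Each neuron excites itself and is uniformly inhibited by the
        # summed activity of the other neurons:
        #   s_i = w_self * x_i - w_inh * sum_{j != i} x_j
        s = w_self * x - w_inh * (x.sum() - x)
        return linear_threshold(s)

    x = np.array([0.9, 1.0, 0.8, 0.95])   # initial neuron activities
    for _ in range(60):
        x = wta_step(x)
    print(x)   # only the initially most active neuron remains positive

Under these assumed weights, the lateral inhibition silences the weaker neurons one by one while the most active neuron keeps a positive activity, which is the Winner-Take-All behavior that the chapter generalizes from a single winning neuron to a selected group.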
(4) In the fifth chapter, the concept of activity invariant sets is proposed to study exponentially stable attractors in discrete-time linear threshold RNNs and Lotka-Volterra RNNs, respectively. Conditions are obtained for locating activity invariant sets. It is also shown that an invariant set can contain one equilibrium point which exponentially attracts all trajectories starting in the set. Since the attractors are located in activity invariant sets, each attractor has a binary pattern and also carries analog information. These results provide a new perspective on applying attractor networks to tasks such as group winner-take-all and associative memory; the standard forms of both models are recalled after this summary.

(5) In the sixth chapter, multistability is studied for two classes of neural networks: bidirectional associative memory recurrent neural networks with unsaturating piecewise linear transfer functions, and the background neural networks. Using local inhibition and energy functions, the three basic properties of a multistable network are treated in full: boundedness, global attractivity, and complete convergence. Bounds on the global attractive sets are then obtained. Moreover, it is shown that shifting the background level affects the number, location, and stability of the equilibrium points in the background neural networks, which means the background neural network can exhibit not only monostability but also multistability. Finally, a class of neural networks with time-varying delays and a kind of discontinuous, monotonically increasing activation function is studied. The discontinuities in this class of neural networks are an idealized model of the situation where the gain of the neuron amplifiers is very high. Conditions ensuring global convergence of the neural network are derived.
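For reference, the following is a minimal sketch of the two model classes named in contribution (4), written in the forms that commonly appear in the linear threshold and Lotka-Volterra network literature; the dissertation's exact formulations, inputs, and gains may differ, so the details here are assumptions. The discrete-time linear threshold RNN updates as

    x_i(k+1) = \max\Big( 0,\ h_i + \sum_{j=1}^{n} w_{ij}\, x_j(k) \Big), \qquad i = 1, \dots, n,

and the continuous-time Lotka-Volterra RNN evolves as

    \dot{x}_i(t) = x_i(t) \Big( h_i - x_i(t) + \sum_{j=1}^{n} w_{ij}\, x_j(t) \Big), \qquad x_i(0) > 0.

In both models the state is nonnegative, so a subset of neurons can be exactly silent; on this reading, an attractor confined to an activity invariant set keeps a fixed on/off pattern of active neurons (the binary pattern) while the active neurons settle to analog values (the analog information).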
Keywords/Search Tags: Computing of group neurons, Permitted sets, Forbidden sets, Unsaturated sets, Activity invariant sets