
Investigation On Delayed Feedback Neural Networks And Two-layer Feedback Neural Networks

Posted on: 2009-01-05
Degree: Doctor
Type: Dissertation
Country: China
Candidate: G K Wu
GTID: 1118360272488827
Subject: Condensed matter physics
Abstract/Summary:
The feedback neural network is one of the most important classes of neural networks, and its most remarkable feature is the capacity for associative memory. The most important paradigm of the feedback neural network is the Hopfield network, which has been studied extensively. However, the Hopfield network and its modifications suffer from serious limitations, such as spurious attractors and low storage capacity, which greatly restrict their applications. Moreover, the Hopfield network is a typical nonlinear dynamical system, and progress on nonlinear dynamical systems over the past ten years has shown that such systems possess a vast number of limit-cycle solutions in addition to fixed-point solutions. At present, most feedback neural networks use only fixed-point attractors to store information. This approach leaves the abundant limit-cycle solutions of nonlinear dynamical systems unexploited, which is wasteful in a certain sense.

Recently, a global learning rule for feedback neural networks with associative memory, the Monte Carlo adaptation rule, was proposed by Prof. Hong Zhao [Phys. Rev. E 70, 066137 (2004)]. The basic idea of this learning rule is to reach a prescribed optimization target by repeatedly perturbing randomly selected elements of the coupling matrix and keeping the changes that improve the target. Feedback networks designed by this rule show very interesting dynamical behaviour; in particular, they exhibit three distinct dynamical phases: a chaos phase, a pure memory phase, and a mixture phase. In the pure memory phase the spurious attractors are suppressed completely, which is very favourable for associative memory.

In this thesis we investigate two problems based on the Monte Carlo adaptation rule. The first is to extend the rule to delayed feedback networks, so that it can be used to design such networks directly; we study the dynamical performance of the resulting networks in detail, including their spurious attractors and storage capacity. The second is to extend the rule so that it can directly design networks that store information as limit cycles; we study the dynamical behaviour and performance of these limit-cycle networks (including multilayer feedback networks and delayed feedback networks) and compare them with the corresponding fixed-point networks.

In the first part of this thesis, we extend the Monte Carlo adaptation rule to the delayed feedback neural network so that memory patterns can be stored as fixed-point attractors or as limit-cycle attractors, and we investigate the storage capacity and dynamics of the resulting networks. We find that the storage capacity grows in proportion to the delay length, as in networks trained by the correlation learning rule based on Hebb's rule, but is much higher than the latter; the generalization capacity is also higher. Another interesting finding is that the spurious attractors disappear entirely in networks trained by the Monte Carlo adaptation rule when the memory limit cycles are sufficiently long.
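To make the idea of the rule concrete, the following is a minimal sketch of a Monte Carlo adaptation loop for a discrete feedback network. The optimization target used here (the minimal aligned local field over all stored patterns) and the greedy acceptance criterion are illustrative assumptions, not necessarily the exact choices of Zhao's original rule.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 32, 4                                   # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))    # random binary memories

def min_stability(J):
    """Smallest aligned local field over all patterns and neurons.

    Every pattern is a fixed point of S -> sign(J @ S) iff this is > 0.
    """
    fields = patterns @ J.T                    # fields[m, i] = sum_j J[i, j] * patterns[m, j]
    return np.min(patterns * fields)

# Start from a random coupling matrix without self-couplings.
J = rng.normal(size=(N, N)) / np.sqrt(N)
np.fill_diagonal(J, 0.0)

best = min_stability(J)
for step in range(20000):
    i, j = rng.integers(N), rng.integers(N)
    if i == j:
        continue                               # keep zero self-coupling
    old = J[i, j]
    J[i, j] += 0.1 * rng.normal()              # trial change of one random element
    new = min_stability(J)
    if new >= best:
        best = new                             # keep moves that do not hurt the target
    else:
        J[i, j] = old                          # otherwise undo the move

print("minimal aligned field after adaptation:", best)
```

The global character of the rule is visible here: every trial change is judged against a property of the whole memory set rather than by a local, pattern-by-pattern update as in Hebbian learning.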
As an example, we demonstrate the application of a delayed feedback network to storing limit cycles that share common intersection points; the network recalls a whole number series from a part of it.

In the second part of this thesis, we construct a two-layer feedback neural network with the Monte Carlo adaptation rule to store memory patterns as fixed-point attractors or as limit-cycle attractors, and we investigate its dynamics. Comparing the network with limit-cycle attractors to the one with fixed-point attractors, we find that the former has better retrieval properties; in particular, spurious attractors can be suppressed completely when the memory patterns are stored as one long limit cycle. In this part we also demonstrate a feasible application of limit-cycle-attractor networks: the network recalls a whole picture from a segment of it.
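The recall-from-a-segment behaviour can be illustrated with a toy limit-cycle network. The sketch below uses the classical asymmetric Hebbian sequence rule in place of the Monte Carlo adaptation rule, so it demonstrates only the retrieval dynamics described above, not the thesis's training method; the cycle length, network size, and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

N, L = 100, 5                                  # neurons, limit-cycle length
cycle = rng.choice([-1, 1], size=(L, N))       # patterns forming one stored cycle

# Asymmetric Hebbian sequence rule: pattern m is mapped onto pattern m+1 (mod L),
# so under synchronous updates the stored sequence becomes a length-L limit cycle.
J = sum(np.outer(cycle[(m + 1) % L], cycle[m]) for m in range(L)) / N

# Cue the network with a "segment": the first pattern with half its
# components replaced by random noise.
S = cycle[0].copy()
S[N // 2:] = rng.choice([-1, 1], size=N - N // 2)

for t in range(2 * L):
    S = np.sign(J @ S)
    S[S == 0] = 1                              # break exact ties deterministically
    overlaps = cycle @ S / N                   # overlap of the state with each pattern
    m = int(overlaps.argmax())
    print(f"t={t + 1:2d}  closest stored pattern: {m}, overlap {overlaps[m]:+.2f}")
```

After one or two synchronous updates the overlap with the stored patterns rises to one and the state then traverses the whole cycle in order, which is the sense in which a partial cue retrieves the entire stored sequence.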
Keywords/Search Tags:feedback neural network, associative memory, limit cycle, attractor, dynamical system