
Fault tolerance of feedforward artificial neural nets and synthesis of robust nets

Posted on: 1995-02-12
Degree: Ph.D
Type: Thesis
University: University of Massachusetts Amherst
Candidate: Phatak, Dhananjay S
Full Text: PDF
GTID: 2478390014490257
Subject: Electrical engineering
Abstract/Summary:
A method is proposed to estimate the fault tolerance of feedforward Artificial Neural Nets (ANNs) and to synthesize robust nets. The fault model abstracts a variety of failure modes of hardware implementations to permanent stuck-at-type faults of single components. A procedure is developed to build fault tolerant ANNs by replicating the hidden units; it exploits the intrinsic weighted summation performed by the processing units to overcome faults. The procedure is simple, robust, and applicable to any feedforward net. Based on it, metrics are devised to quantify fault tolerance as a function of redundancy.

Furthermore, a lower bound on the redundancy required to tolerate all possible single faults is analytically derived. This bound shows that anything less than Triple Modular Redundancy (TMR) cannot provide complete fault tolerance for all possible single faults. The result establishes a necessary condition that holds for all feedforward nets, irrespective of network topology or the task the net is trained on. Extensive simulations indicate that the redundancy actually needed to synthesize a completely fault tolerant net is specific to the problem at hand and is usually much higher than the general lower bound dictates. The data imply that the conventional TMR scheme of replication and majority vote is the best way to achieve complete fault tolerance in most ANNs.

Although the redundancy needed for complete fault tolerance is substantial, the results show that ANNs exhibit good partial fault tolerance to begin with and degrade gracefully. For large nets, exhaustive testing of all possible single faults is prohibitive; hence, a strategy of randomly testing a small fraction of the total number of links is adopted. It yields partial fault tolerance estimates that are very close to those obtained by exhaustive testing.

The last part of the thesis develops improved learning algorithms that favor fault tolerance. Here, the objective function for gradient descent is modified to include extra terms that reward fault tolerance. Simulations indicate that the algorithm works only if the relative weight of the extra terms is small.

There are two different ways to achieve fault tolerance: (1) search for the minimal net and replicate it, or (2) provide redundancy to begin with and use the improved training algorithms. A natural question is which of the two schemes is better. Contrary to expectation, the replication scheme seems to win in almost all cases, and a justification of why this might be true is provided.

Several interesting open problems are discussed and future extensions are suggested.
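The hidden-unit replication procedure lends itself to a short illustration. The sketch below is hypothetical Python/NumPy code, not the thesis's implementation: each hidden unit is copied R times and its outgoing weights are divided by R, so the fault-free function is unchanged while a single stuck-at fault perturbs only a 1/R share of each weighted sum. The 1/R weight scaling is the assumed way of exploiting the weighted summation mentioned above.

    import numpy as np

    def forward(x, W_in, W_out):
        # plain feedforward pass with sigmoid hidden units
        h = 1.0 / (1.0 + np.exp(-W_in @ x))
        return W_out @ h

    def replicate_hidden_layer(W_in, W_out, R):
        # W_in: (n_hidden, n_in) weights, W_out: (n_out, n_hidden) weights
        W_in_r = np.repeat(W_in, R, axis=0)        # every copy sees the same inputs
        W_out_r = np.repeat(W_out, R, axis=1) / R  # copies split the original weight
        return W_in_r, W_out_r

    # the replicated net computes exactly the same function as the original
    rng = np.random.default_rng(0)
    W_in, W_out = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
    x = rng.normal(size=3)
    W_in_r, W_out_r = replicate_hidden_layer(W_in, W_out, R=3)
    assert np.allclose(forward(x, W_in, W_out), forward(x, W_in_r, W_out_r))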
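The random link-testing strategy can be sketched in the same setting, again as hypothetical code that reuses forward and the weight matrices from the sketch above. Only stuck-at-0 faults are injected here for simplicity; the thesis's fault model covers stuck-at faults more generally. The estimate is simply the fraction of sampled single-link faults that the net tolerates on a test set.

    def estimate_fault_tolerance(W_in, W_out, x_test, y_test,
                                 n_trials=200, tol=0.5, rng=None):
        # fraction of sampled single stuck-at-0 link faults for which every
        # test output stays within tol of its target
        rng = rng or np.random.default_rng()
        layers = [W_in, W_out]
        tolerated = 0
        for _ in range(n_trials):
            li = rng.integers(len(layers))
            W_faulty = layers[li].copy()
            idx = tuple(rng.integers(s) for s in W_faulty.shape)
            W_faulty[idx] = 0.0                      # inject the stuck-at fault
            weights = [W_faulty if i == li else layers[i] for i in range(2)]
            outputs = np.array([forward(x, *weights) for x in x_test])
            if np.all(np.abs(outputs - y_test) < tol):
                tolerated += 1
        return tolerated / n_trials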
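For the modified objective, the abstract does not spell out the extra terms, so the following form is only an assumption for illustration: the usual squared error plus a small penalty, weighted by lam, on how much the output moves when a randomly chosen hidden-to-output link is stuck at zero. Consistent with the abstract, such a scheme would only be expected to help when the relative weight of the extra term (lam here) is kept small.

    def augmented_loss(W_in, W_out, x, y, lam=0.01, rng=None):
        # assumed illustrative objective: squared error plus lam times the
        # output deviation caused by one sampled stuck-at-0 link fault
        rng = rng or np.random.default_rng()
        y_clean = forward(x, W_in, W_out)
        base = 0.5 * np.sum((y_clean - y) ** 2)
        W_faulty = W_out.copy()
        i = rng.integers(W_out.shape[0])
        j = rng.integers(W_out.shape[1])
        W_faulty[i, j] = 0.0
        sensitivity = 0.5 * np.sum((forward(x, W_in, W_faulty) - y_clean) ** 2)
        return base + lam * sensitivity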
Keywords/Search Tags: Fault tolerance, Nets, Feedforward, Robust, ANNs