Algorithm Design And Convergence Analysis For Fault-Tolerant Neural Networks

Posted on: 2016-06-14
Degree: Master
Type: Thesis
Country: China
Candidate: Q Q Chang
Full Text: PDF
GTID: 2308330461973868
Subject: Applied Mathematics
Abstract/Summary:
Neural networks have been widely used because of their excellent nonlinear approximation ability. To improve the fault tolerance of neural networks, we focus on algorithm design and theoretical analysis for networks trained with weight noise injection. Borrowing ideas from statistical learning, we introduce a Group Lasso penalty into the objective function of the fault-tolerant network; the penalty induces a group-sparse structure on the weights, which effectively improves both the generalization of the network and its ability to prune hidden nodes. However, because the penalty is nondifferentiable at the origin, two problems must be overcome: 1) the algorithm is prone to oscillation in numerical simulations; 2) its convergence is difficult to analyze directly. By employing a smoothing approximation, we propose a fault-tolerant learning algorithm for networks with weight noise injection that effectively avoids numerical oscillation. First, using the fact that the smoothed approximating function is continuously differentiable, we derive weak and strong convergence results for the smoothing fault-tolerant algorithm. Then, by means of the Clarke gradient, we rigorously prove that the original nonsmooth fault-tolerant algorithm attains the same convergence results under certain additional conditions.
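The abstract does not give the concrete objective function, but the two ingredients it names, a Group Lasso penalty over hidden-node weight groups and a smoothing approximation that removes the nondifferentiability at the origin, can be sketched as follows. This is a minimal Python illustration, not the thesis's actual algorithm: the smoothing form sqrt(||w_g||^2 + mu^2) - mu, the additive Gaussian noise model, the single-hidden-layer network, and all parameter names (mu, lam, sigma, lr) are assumptions made for the sketch.

```python
# Minimal sketch (assumptions, not the thesis's algorithm): a Group Lasso
# penalty smoothed near the origin, plus one gradient step evaluated at a
# noise-injected copy of the weights of a one-hidden-layer network.
import numpy as np


def smoothed_group_lasso(W, mu=1e-3):
    """Sum over hidden-node weight groups (rows of W) of
    sqrt(||w_g||^2 + mu^2) - mu.

    The exact Group Lasso term sum_g ||w_g|| is nondifferentiable where a
    group vanishes; adding mu^2 under the root makes it continuously
    differentiable everywhere and recovers the exact penalty as mu -> 0.
    """
    norms = np.sqrt(np.sum(W**2, axis=1) + mu**2)
    return float(np.sum(norms - mu))


def smoothed_group_lasso_grad(W, mu=1e-3):
    """Gradient of the smoothed penalty; well defined even for zero rows."""
    norms = np.sqrt(np.sum(W**2, axis=1, keepdims=True) + mu**2)
    return W / norms


def train_step(W, v, x, t, lam=1e-2, sigma=0.05, lr=0.01, mu=1e-3, rng=None):
    """One gradient step on 0.5*(y - t)^2 + lam * smoothed Group Lasso,
    with the forward pass run through a noise-injected copy of the input
    weights (one additive Gaussian draw per step -- an assumption here)."""
    rng = np.random.default_rng() if rng is None else rng
    Wn = W + sigma * rng.standard_normal(W.shape)  # inject weight noise
    h = np.tanh(Wn @ x)                            # hidden activations
    y = v @ h                                      # linear output
    e = y - t                                      # output error
    # Backpropagate through the noisy forward pass
    grad_v = e * h
    grad_W = np.outer(e * v * (1 - h**2), x)
    grad_W += lam * smoothed_group_lasso_grad(W, mu)
    return W - lr * grad_W, v - lr * grad_v


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W, v = rng.standard_normal((8, 4)), rng.standard_normal(8)
    x, t = rng.standard_normal(4), 1.0
    for _ in range(200):
        W, v = train_step(W, v, x, t, rng=rng)
    print("penalty:", smoothed_group_lasso(W))
```

Because the smoothed penalty is C^1, an ordinary gradient step applies and the oscillation caused by the nonsmooth point at the origin is avoided; as mu shrinks, whole rows of W (hidden nodes) can be driven toward zero, which is the pruning effect described above.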
Keywords/Search Tags: Fault-tolerant neural network, Group Lasso penalty, Smoothing approximation, Clarke gradient, Convergence