
Convergence Analysis Of Algorithms For Interval Neural Networks

Posted on: 2015-09-18    Degree: Doctor    Type: Dissertation
Country: China    Candidate: D K Yang    Full Text: PDF
GTID: 1228330467487156    Subject: Computational Mathematics
Abstract/Summary:
Artificial neural networks have been successfully applied in various domains because of their unique structure and way of processing information. In real-life situations, however, the available information is often uncertain, imprecise and incomplete, and can be represented by interval data or fuzzy data. Traditional neural networks handle only deterministic data sets and can hardly process such indeterminate data. Interval analysis is a tool for handling interval data in numerical computing, and multi-layer feed-forward neural networks possess a strong nonlinear mapping capability; interval neural networks, which combine interval analysis with neural networks, can therefore process interval data.

Although interval neural networks have been applied successfully in several fields, there is relatively little theoretical analysis of them. It is therefore of practical value to analyze the learning ability and convergence of interval neural networks, which will greatly promote their application. In this dissertation, learning algorithms and convergence results are studied for different interval neural network models. The main results can be summarized as follows:

1. In Chapter 1, background information on traditional neural networks and interval analysis is introduced.

2. In Chapter 2, a smoothing interval neuron is proposed to prevent the weight oscillation of the original interval neuron during the learning procedure. Here, smoothing means that, in the activation function, the absolute values of the weights are replaced by a smooth function of the weights. The convergence of a gradient algorithm for training the smoothing interval neuron is proved for both a fixed and a variable learning rate. (A minimal sketch of this smoothing idea is given after this summary.)

3. In Chapter 3, an interval neural network consisting of interval neurons is considered, and a smoothing interval neural network, built from the smoothing interval neurons introduced in Chapter 2, is proposed to prevent weight oscillation during the learning procedure. A convergence result for a gradient algorithm training the smoothing interval neural network is obtained.

4. In Chapter 4, a modified gradient-based learning algorithm is proposed for the one-layer interval perceptron. In the modified algorithm, the absolute-value function applied to the radius of each interval weight in the original algorithm is replaced by a quadratic term. Compared with the original algorithm, this modification prevents weight oscillation during the learning procedure. The monotonicity of the error function and the convergence of the modified algorithm are proved. (A sketch of this modification follows the smoothing sketch below.)

5. In Chapter 5, the extreme learning machine (ELM) is applied to the learning of interval neural networks whose inputs and outputs are vectors with interval components and whose weights are real numbers. Because the BP learning algorithm is slow while ELM learns quickly, applying ELM to interval neural networks yields an interval extreme learning machine (IELM). Numerical experiments show that IELM is much faster than the usual BP algorithm and that its generalization performance is much better than that of BP. (A sketch of one possible IELM procedure is given at the end.)
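The dissertation does not give code; the following minimal Python sketch only illustrates the smoothing idea of Chapter 2 under stated assumptions: an interval input is represented by a center vector and a radius vector, the pre-activation interval has center w·x_c and radius |w|·x_r, and the smooth surrogate sqrt(w² + ε) stands in for |w|. The surrogate, the sigmoid activation, the finite-difference gradient and all names are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np

# Hypothetical sketch of a smoothing interval neuron (names and the exact
# smooth surrogate are assumptions, not taken from the dissertation).
# An interval input is given by its center x_c and radius x_r >= 0.
# Interval arithmetic gives the pre-activation interval
#   center = w . x_c,   radius = |w| . x_r,
# and the smoothing idea replaces |w| by a smooth function of w so that
# the gradient is continuous at w = 0 and the weights stop oscillating.

EPS = 1e-4  # smoothing parameter (assumed)

def smooth_abs(w):
    """Smooth surrogate for |w|; sqrt(w^2 + eps) is one common choice."""
    return np.sqrt(w ** 2 + EPS)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, x_c, x_r):
    """Forward pass of one smoothing interval neuron.

    Returns the output interval [y_lo, y_hi] obtained by applying the
    monotone sigmoid to the endpoints of the pre-activation interval.
    """
    center = np.dot(w, x_c)
    radius = np.dot(smooth_abs(w), x_r)   # |w| replaced by its smooth version
    return sigmoid(center - radius), sigmoid(center + radius)

def gradient_step(w, x_c, x_r, t_lo, t_hi, lr=0.1):
    """One gradient step on a squared error over both interval endpoints,
    using a simple numerical gradient (for illustration only)."""
    def loss(w_):
        y_lo, y_hi = forward(w_, x_c, x_r)
        return 0.5 * ((y_lo - t_lo) ** 2 + (y_hi - t_hi) ** 2)

    grad, h = np.zeros_like(w), 1e-6
    for i in range(w.size):               # finite-difference gradient
        e = np.zeros_like(w); e[i] = h
        grad[i] = (loss(w + e) - loss(w - e)) / (2 * h)
    return w - lr * grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=3)
    x_c, x_r = np.array([0.5, -1.0, 0.2]), np.array([0.1, 0.05, 0.2])
    for _ in range(200):
        w = gradient_step(w, x_c, x_r, t_lo=0.2, t_hi=0.6)
    print("trained weights:", w, "output interval:", forward(w, x_c, x_r))
```

Because smooth_abs is everywhere differentiable, the gradient no longer flips sign abruptly when a weight crosses zero, which is the intuition behind the reduced weight oscillation.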
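Similarly, the next sketch illustrates the Chapter 4 modification for the one-layer interval perceptron: the nonnegative radius of each interval weight, written as |v| in the original algorithm, is reparameterized by the quadratic term v², so the error function remains differentiable at v = 0. The center-radius product enclosure, the toy targets and all names are assumptions made only for illustration.

```python
import numpy as np

# Hypothetical sketch of the Chapter-4 modification.  Each interval weight
# W_i = [c_i - r_i, c_i + r_i] needs a nonnegative radius r_i.  The original
# algorithm writes r_i = |v_i|, which is not differentiable at v_i = 0; the
# modified algorithm uses the quadratic term r_i = v_i**2, keeping r_i >= 0
# while the gradient stays continuous, which suppresses weight oscillation.

def interval_mul(c1, r1, c2, r2):
    """Center-radius enclosure of the product of two intervals (assumed form)."""
    return c1 * c2, np.abs(c1) * r2 + r1 * np.abs(c2) + r1 * r2

def output_interval(c, v, x_c, x_r):
    """Pre-activation interval of a one-layer interval perceptron,
    with the quadratic radius parameterization r = v**2."""
    prod_c, prod_r = interval_mul(c, v ** 2, x_c, x_r)
    return prod_c.sum(), prod_r.sum()

def loss(theta, x_c, x_r, t_c, t_r):
    """Squared error between the output and target intervals (center/radius)."""
    n = x_c.size
    y_c, y_r = output_interval(theta[:n], theta[n:], x_c, x_r)
    return 0.5 * ((y_c - t_c) ** 2 + (y_r - t_r) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x_c, x_r = np.array([1.0, -0.5]), np.array([0.2, 0.1])
    theta = rng.normal(size=4)                # [weight centers, radius parameters]
    lr, h = 0.05, 1e-6
    for _ in range(500):                      # plain gradient descent
        grad = np.zeros_like(theta)
        for i in range(theta.size):           # finite-difference gradient
            e = np.zeros_like(theta); e[i] = h
            grad[i] = (loss(theta + e, x_c, x_r, 0.8, 0.3)
                       - loss(theta - e, x_c, x_r, 0.8, 0.3)) / (2 * h)
        theta -= lr * grad
    print("final loss:", loss(theta, x_c, x_r, 0.8, 0.3))
```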
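Finally, a hedged sketch of an interval extreme learning machine in the spirit of Chapter 5: hidden weights are drawn at random, hidden output intervals are propagated through the monotone sigmoid, and real output weights are obtained in one step with the Moore-Penrose pseudoinverse. Stacking the lower- and upper-endpoint equations into a single least-squares problem is an assumption of this sketch, not necessarily the dissertation's formulation, and all names and data are illustrative.

```python
import numpy as np

# Hypothetical IELM sketch.  An interval input is (Xc, Xr) with Xc the
# centers and Xr >= 0 the radii.  With real hidden weights W, the hidden
# pre-activation interval has center Xc @ W.T + b and radius Xr @ |W|.T,
# and the monotone sigmoid maps its endpoints to the hidden output interval.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_intervals(W, b, Xc, Xr):
    """Hidden-layer output intervals for a batch of interval inputs."""
    center = Xc @ W.T + b
    radius = Xr @ np.abs(W).T
    return sigmoid(center - radius), sigmoid(center + radius)

def train_ielm(Xc, Xr, Tlo, Thi, n_hidden=20, seed=0):
    """Fit real output weights by least squares over both interval endpoints."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_hidden, Xc.shape[1]))   # random hidden weights (ELM)
    b = rng.normal(size=n_hidden)
    Hlo, Hhi = hidden_intervals(W, b, Xc, Xr)
    H = np.vstack([Hlo, Hhi])                      # stack endpoint equations
    T = np.concatenate([Tlo, Thi])
    beta = np.linalg.pinv(H) @ T                   # one-step pseudoinverse solve
    return W, b, beta

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    Xc = rng.uniform(-1, 1, size=(100, 3))
    Xr = rng.uniform(0, 0.1, size=(100, 3))
    y = np.sin(Xc.sum(axis=1))                     # toy target centers
    Tlo, Thi = y - 0.05, y + 0.05                  # toy target intervals
    W, b, beta = train_ielm(Xc, Xr, Tlo, Thi)
    Hlo, _ = hidden_intervals(W, b, Xc, Xr)
    print("train RMSE (lower endpoint):", np.sqrt(np.mean((Hlo @ beta - Tlo) ** 2)))
```

The single pseudoinverse solve is what makes ELM-style training much faster than iterative BP, which matches the speed comparison reported in the abstract.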
Keywords/Search Tags: Interval analysis, Interval neuron, Interval neural network, Gradient algorithm, Smoothing, Convergence