
Research on Optimization Methods for Learning in Feedforward Artificial Neural Networks Using Derivative Constraint Relations

Posted on: 2007-07-07    Degree: Master    Type: Thesis
Country: China    Candidate: R F Yang    Full Text: PDF
GTID: 2178360212465623    Subject: Computer application technology
Abstract/Summary:
The principle of artificial neural networks is modeled on the working of the human brain: the abilities of a network emerge from the cooperation of a large number of neural units. One important characteristic of artificial neural networks is their ability to learn. In essence, learning in an artificial neural network is an optimization process in which the network adjusts its weights according to concrete error information.

Current optimization algorithms for neural network learning train the network only according to the error of the output data, without effectively incorporating the relationships among the sample data, which limits the network's generalization ability and practical value. Using the output error alone as the training criterion amounts to interpolating the sample data, and because such interpolation is non-smooth, large errors on non-sample data are to be expected. Hence, in addition to the information offered by the sample data, we should study the relationships among the sample data and transform them into constraint conditions to be used in the optimization learning of the network.

When establishing the input-output model of a network, the most important guiding information is the derivative relation between input and output. Only when this derivative relation is established can we obtain an accurate numerical correspondence between input and output.

In this paper, we take the optimization learning of neural networks as the main line and aim to improve the speed and quality of network training. We focus on optimization learning methods for neural networks and propose introducing derivative constraint relations into network training. Our research covers: the principle of and strategy for incorporating derivative relations into the optimization learning of neural networks; the extraction and modeling of the derivative relations; the design of training algorithms for neural networks based on derivative constraint relations; and Matlab simulations to validate them. From our research and experiments we conclude that the methods in this paper greatly reduce the output error of the network, require fewer training iterations, and improve the generalization ability of the network.
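The abstract does not spell out the training algorithm, so the following is only a minimal sketch of the general idea: enforcing a known derivative relation between input and output as a penalty term in the training loss of a feedforward network. It is my own illustration in Python with JAX (the thesis itself used Matlab); the toy target f(x) = sin(x) with known derivative cos(x), the one-hidden-layer architecture, and the penalty weight lam are all illustrative assumptions, not the thesis' actual setup.

import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)

# Toy task: learn f(x) = sin(x) on [0, pi]; the known relation f'(x) = cos(x)
# plays the role of the derivative constraint.
x = jnp.linspace(0.0, jnp.pi, 40)
y_target = jnp.sin(x)
dy_target = jnp.cos(x)

params = {
    "W1": 0.5 * jax.random.normal(k1, (1, 20)),
    "b1": jnp.zeros(20),
    "W2": 0.5 * jax.random.normal(k2, (20, 1)),
    "b2": jnp.zeros(1),
}

def net(params, x_scalar):
    # One-hidden-layer feedforward network, scalar input -> scalar output.
    h = jnp.tanh(x_scalar * params["W1"][0] + params["b1"])
    return jnp.dot(h, params["W2"][:, 0]) + params["b2"][0]

def loss(params, lam=0.1):
    y_pred = jax.vmap(lambda xi: net(params, xi))(x)
    dy_pred = jax.vmap(jax.grad(lambda xi: net(params, xi)))(x)  # dy/dx by autodiff
    output_err = jnp.mean((y_pred - y_target) ** 2)    # ordinary output-error term
    deriv_err = jnp.mean((dy_pred - dy_target) ** 2)   # derivative-constraint term
    return output_err + lam * deriv_err

grad_fn = jax.jit(jax.grad(loss))
lr = 0.05
for step in range(3000):
    g = grad_fn(params)
    params = jax.tree_util.tree_map(lambda p, gi: p - lr * gi, params, g)

print("final loss:", float(loss(params)))

In this sketch the constraint is soft: the penalty weight lam trades off fitting the sample outputs against matching the known derivative, which is one plausible way to turn a derivative relation into a constraint condition for optimization learning.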
Keywords/Search Tags: Neural Networks, Learning Optimization, Derivative Relations