A Study Of Constrained Learning Algorithms Encoding The A Priori Information Of The Problem

Posted on: 2007-03-04
Degree: Doctor
Type: Dissertation
Country: China
Candidate: F Han
Full Text: PDF
GTID: 1118360185951329
Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
Because the traditional gradient-based backpropagation (BP) algorithm and its many improved variants consider only the desired input/output data, ignoring the structural properties of the network and the a priori information or constraint conditions of the underlying problem, they suffer from slow convergence and poor generalization. Encoding a priori information into the learning algorithm is therefore an important research direction: the a priori information steers training in the right direction, which can greatly reduce the training time and improve the generalization performance of the network. This thesis presents a systematic study of constrained learning algorithms (CLAs) that incorporate a priori information from the problem. The main contributions are as follows:

1. A class of improved learning algorithms that encode additional functional constraints is presented. The constraints are derived from the first- and second-order derivatives of the activation functions of the hidden and output neurons and are incorporated as additional cost-function terms. These algorithms simultaneously penalize the input-to-output mapping sensitivity and the high-frequency weight components that arise during training, improving both the generalization performance and the convergence rate of the network. The effects on generalization of the various combinations of hidden-neuron and output-neuron constraints, of the number of hidden neurons, and of the algorithms' free parameters are studied in depth. In addition, a further class of algorithms that incorporates a magnified gradient function into the above constrained learning algorithms is proposed, which converges faster and has a better chance of escaping local minima. Finally, on time-series problems these constrained learning algorithms outperform traditional gradient-based learning algorithms. A sketch of the constrained update appears below.
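The following is a minimal sketch, not the thesis's exact algorithm: batch gradient descent on a single-hidden-layer sigmoid network whose cost augments the mean-squared error with a penalty on the first derivative of the hidden activations, in the spirit of the additional functional constraints described above. The penalty weight lam, the layer sizes, and the learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_constrained(X, Y, n_hidden=8, lam=1e-3, lr=0.1, epochs=2000, seed=0):
    """Gradient descent on E = MSE + lam * sum(f'(net1)^2) for a 1-hidden-layer net."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden));  b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)
    n = X.shape[0]
    for _ in range(epochs):
        # Forward pass.
        net1 = X @ W1 + b1
        h = sigmoid(net1)              # hidden activations
        yhat = sigmoid(h @ W2 + b2)    # network output

        # Standard BP gradients for the mean-squared-error term.
        err = yhat - Y
        d2 = err * yhat * (1 - yhat)   # delta at output layer
        d1 = (d2 @ W2.T) * h * (1 - h) # delta at hidden layer
        gW2, gb2 = h.T @ d2, d2.sum(0)
        gW1, gb1 = X.T @ d1, d1.sum(0)

        # Additional functional constraint: penalize sum of f'(net1)^2, which
        # discourages high input-to-output mapping sensitivity at the hidden layer.
        fp  = h * (1 - h)              # f'(net1) for the sigmoid
        fpp = fp * (1 - 2 * h)         # f''(net1) for the sigmoid
        gpen = 2 * fp * fpp            # d/d(net1) of f'(net1)^2
        gW1 += lam * (X.T @ gpen)
        gb1 += lam * gpen.sum(0)

        # Gradient-descent update on the augmented cost.
        W1 -= lr * gW1 / n; b1 -= lr * gb1 / n
        W2 -= lr * gW2 / n; b2 -= lr * gb2 / n
    return W1, b1, W2, b2
```

Setting lam to zero recovers plain BP; the thesis studies how such constraint terms interact with the number of hidden neurons and the algorithm's free parameters, and its magnified-gradient variant further modifies the deltas d1 and d2, which this sketch does not attempt.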
Keywords/Search Tags: Feedforward neural networks, a priori information, constrained learning algorithm, backpropagation algorithm, extreme learning machine, particle swarm optimization, generalization performance, convergence rate, function approximation