Artificial neural networks: Learning algorithms, performance evaluation, and applications

Posted on: 1992-08-22    Degree: Ph.D.    Type: Dissertation
University: University of Toronto (Canada)    Candidate: Karayiannis, Nicolaos B.
GTID: 1478390014498731    Subject: Computer Science
Abstract/Summary:
This dissertation studies the training, performance, and applications of feed-forward neural networks. A new formulation of the training problem provides a family of Efficient LEarning Algorithms for Neural NEtworks (ELEANNE), which converge faster than existing algorithms. This family includes learning algorithms with second-order convergence properties for the training of single-layered neural networks. These algorithms form the basis for learning algorithms for multi-layered neural networks that converge faster than the Error Back-Propagation algorithm. A generalized criterion for the training of neural networks is then proposed. According to this criterion, the internal parameters of the network are updated by minimizing an error function that changes during training. Depending on the optimization strategy used, this generalized criterion yields a variety of fast learning algorithms for neural networks. The performance of various neural network architectures and learning schemes is subsequently studied. Extensive analysis reveals the effect of the training scheme on the capacity and generalization efficiency of single-layered neural networks. In addition, several experiments provide the basis for evaluating the performance of multi-layered neural networks. The architecture, training, and performance of high-order neural networks are studied next. Neural networks with composite key patterns are also proposed as an essential generalization of high-order neural networks. Finally, a general methodology for the development of neural network systems yields successful neural network systems for decision making, classification, prediction, and associative recall.
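The abstract contrasts second-order training of single-layered networks with first-order schemes such as Error Back-Propagation, but does not reproduce the ELEANNE update rules. The sketch below is only an illustration of that contrast under assumed details: a single sigmoid layer, a sum-of-squares error, a plain gradient-descent update, and a damped Gauss-Newton step standing in for a generic second-order method. All function names and parameters here are hypothetical and not taken from the dissertation.

import numpy as np

# Illustrative toy comparison of first- and second-order training of a
# single-layer sigmoid network on a linearly separable problem.
# This is NOT the ELEANNE algorithm; it only shows the kind of
# convergence difference the abstract refers to.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W, X):
    return sigmoid(X @ W)                 # single-layer outputs

def sse(W, X, T):
    return 0.5 * np.sum((forward(W, X) - T) ** 2)   # sum-of-squares error

def gradient_step(W, X, T, lr=0.5):
    Y = forward(W, X)
    G = X.T @ ((Y - T) * Y * (1 - Y))     # gradient of the SSE
    return W - lr * G

def gauss_newton_step(W, X, T, damping=1e-3):
    Y = forward(W, X)
    J = X * (Y * (1 - Y))                 # Jacobian of the outputs w.r.t. W
    H = J.T @ J + damping * np.eye(W.shape[0])   # damped Hessian approximation
    G = J.T @ (Y - T)
    return W - np.linalg.solve(H, G)      # second-order (Newton-type) step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.hstack([rng.normal(size=(200, 2)), np.ones((200, 1))])   # inputs + bias
    T = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)        # target labels

    W_gd = np.zeros((3, 1))
    W_gn = np.zeros((3, 1))
    for _ in range(20):
        W_gd = gradient_step(W_gd, X, T)
        W_gn = gauss_newton_step(W_gn, X, T)
    print("SSE after 20 epochs, gradient descent:", sse(W_gd, X, T))
    print("SSE after 20 epochs, Gauss-Newton:    ", sse(W_gn, X, T))

On a problem like this the damped Gauss-Newton iteration typically reaches a low error in far fewer epochs than plain gradient descent, which is the qualitative behaviour the abstract attributes to the second-order members of the ELEANNE family.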
Keywords/Search Tags: Neural networks, learning algorithms, performance evaluation, training, convergence