
Design and training of neural networks using computational geometry

Posted on: 1995-05-13  Degree: Ph.D  Type: Thesis
University: The Pennsylvania State University  Candidate: Garga, Amulya Kumar  Full Text: PDF
GTID: 2478390014490477  Subject: Engineering
Abstract/Summary:
Artificial neural networks offer a different method of computation that promises to alleviate several problems encountered in sequential processing. Several gradient-based approaches exist for training feedforward neural networks; they are simple but usually slow and not guaranteed to converge. In this thesis, a significantly different approach is taken toward neural network design and training. This rigorous approach draws upon results from computational geometry and neural networks. We show how to design a feedforward neural network (VONNET) that performs pattern classification, based on the construction of the Voronoi diagram or Delaunay tessellation of the points representing the exemplars in multidimensional feature space. The procedures are efficient, robust, and guaranteed to converge because they exploit geometric information that is not usually used. They also provide structural adaptation as well as the connection weights and thresholds for the network, eliminating the guesswork usually necessary in other techniques. In addition to providing exact classification, they are unique in suggesting equivalent alternate structures, giving flexibility in satisfying additional design specifications. Results pertaining to various properties of VONNETs and of the design and training procedures are also established.

We show that neural networks can be used to solve some hard problems in computational geometry by showing how a trainable neural network can construct the Delaunay tessellation over specified points. This leads to the exciting prospect of an all-neural approach to the pattern classification problem. The issue of robustness to noise is formally discussed, and a method is given to reduce the size of VONNETs without unduly sacrificing classification performance. A reduced VONNET constructed with this method for the n-bit parity problem achieves optimal robustness and meets the lower bound on size. A design procedure is given for recurrent neural networks that use VONNETs and are capable of learning trajectories. Results about the size of such networks are given and several open problems are proposed. This thesis also represents an initial step toward developing a realizability theory for neural networks. The efficient and robust nature of VONNETs makes them valuable for many real-time applications and, in general, for most applications that require pattern classification.
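To illustrate the general idea behind Voronoi-based classification (this is a minimal sketch of that idea, not the thesis's exact VONNET construction or training procedure), the Voronoi cell of each exemplar is the intersection of half-spaces bounded by the perpendicular bisectors with its Delaunay neighbors. A query point can therefore be classified by a first layer of linear threshold units that evaluate the bisector hyperplanes, followed by a second layer that ANDs the half-space indicators of each cell. Names such as build_vonnet_sketch are illustrative only.

```python
# Minimal sketch of Voronoi/Delaunay-based classification (assumed form,
# not the thesis's exact VONNET design): bisector hyperplanes between
# Delaunay-neighboring exemplars act as first-layer linear threshold units;
# cell membership (an AND over those units) gives the class decision.
import numpy as np
from scipy.spatial import Delaunay


def build_vonnet_sketch(exemplars, labels):
    """Return, for each exemplar, the (w, b) bisector hyperplanes with its Delaunay neighbors."""
    tri = Delaunay(exemplars)
    indptr, indices = tri.vertex_neighbor_vertices  # Delaunay adjacency
    planes = {i: [] for i in range(len(exemplars))}
    for i in range(len(exemplars)):
        for j in indices[indptr[i]:indptr[i + 1]]:
            # w.x + b >= 0 iff x is at least as close to exemplar i as to j.
            w = exemplars[i] - exemplars[j]
            b = 0.5 * (np.dot(exemplars[j], exemplars[j]) - np.dot(exemplars[i], exemplars[i]))
            planes[i].append((w, b))
    return planes, labels


def classify(x, planes, labels):
    """First layer: bisector threshold units; second layer: AND per Voronoi cell."""
    for i, cell_planes in planes.items():
        if all(np.dot(w, x) + b >= 0 for (w, b) in cell_planes):
            return labels[i]
    return None  # only reachable through numerical issues on a cell boundary


if __name__ == "__main__":
    X = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0], [3.0, 2.0]])
    y = np.array([0, 0, 1, 1])
    planes, labels = build_vonnet_sketch(X, y)
    print(classify(np.array([0.5, 0.2]), planes, labels))  # falls in the cell of exemplar 0 -> class 0
    print(classify(np.array([2.6, 1.8]), planes, labels))  # falls in the cell of exemplar 3 -> class 1
```

Restricting the hyperplanes to Delaunay neighbors rather than all exemplar pairs is what keeps such a construction compact: a Voronoi cell is already determined by the bisectors with its neighbors, which echoes the abstract's point that geometric structure removes guesswork about network size.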
Keywords/Search Tags: Neural networks, Design and training, Pattern classification, Computational geometry, VONNETs