
Techniques in neural network training with enhanced robustness

Posted on: 2004-12-18
Degree: Ph.D
Type: Dissertation
University: University of Idaho
Candidate: Manic, Milos
Full Text: PDF
GTID: 1468390011972264
Subject: Computer Science
Abstract/Summary:
Since the pioneering work of McCulloch and Pitts in the early 1940s and the introduction of the concept of the artificial neuron, numerous attempts have been made to automate the process of training neural networks. Neural networks, though successfully applied in many different areas, still exhibit significant convergence problems with respect to the choice of network parameters, architecture, initial weights, and other governing parameters.

The goal of this dissertation was to develop different approaches to iterative search that achieve fast and robust convergence. A further goal was to achieve robustness without sacrificing speed, and to develop computationally less intensive algorithms that facilitate effective software and hardware implementation.

Various improvements to existing techniques of robust gradient search have been proposed. These improvements encompass neural network architecture manipulation, such as network compression and automatic last-layer training, as well as enhancements to robust gradient search and a synergistic combination of gradient and evolutionary algorithms. They also include random partial gradient probing, overdetermined pseudo-inverse random gradient search, random feed-forward processing, alpha gradient jump, and parameter adaptivity. These ideas were combined to form new algorithms.

The expected results, namely fast, robust, and computationally less intensive algorithms, were confirmed experimentally.
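The abstract names these techniques without detailing them. As a minimal illustration of the general idea behind one of them, random partial gradient probing, the sketch below numerically probes the gradient of only a random subset of weights per iteration and updates just those, reducing per-step cost. The tiny network, the XOR task, the subset size, and all hyperparameters here are assumptions for illustration, not the dissertation's actual algorithms.

```python
# Illustrative sketch (not the dissertation's actual algorithm): random
# partial gradient probing on a tiny one-hidden-layer network. Each
# iteration numerically estimates the gradient for a random subset of
# weights via central differences and updates only those entries.
import numpy as np

rng = np.random.default_rng(0)

# XOR data: a standard small benchmark (an assumption for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    # Unpack the flat parameter vector into a 2-4-1 network.
    W1 = w[:8].reshape(2, 4)
    b1 = w[8:12]
    W2 = w[12:16]
    b2 = w[16]
    h = np.tanh(X @ W1 + b1)           # hidden layer
    return h @ W2 + b2                 # linear output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

w = rng.normal(scale=0.5, size=17)     # flat weight vector, 17 parameters
lr, eps = 0.5, 1e-5

for step in range(2000):
    # Probe only a random subset of the weights this iteration.
    idx = rng.choice(w.size, size=5, replace=False)
    grad = np.zeros_like(w)
    for i in idx:
        w_plus = w.copy();  w_plus[i] += eps
        w_minus = w.copy(); w_minus[i] -= eps
        grad[i] = (loss(w_plus) - loss(w_minus)) / (2 * eps)
    w -= lr * grad                     # update only the probed weights

print(f"final loss: {loss(w):.4f}")
print("predictions:", np.round(forward(w, X), 2))
```

Because each step touches only a few weights and needs only forward passes, a scheme like this trades per-step accuracy for lower computational cost, which is consistent with the dissertation's stated aim of algorithms that suit both software and hardware implementation.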
Keywords/Search Tags: Robust, Neural, Network, Training, Algorithms