
A study of the accuracy, completeness, and efficiency of artificial neural networks and related inductive learning techniques

Posted on: 2002-05-02
Degree: Ph.D.
Type: Dissertation
University: Iowa State University
Candidate: Carmichael, Craig Garlin
Full Text: PDF
GTID: 1468390011498846
Subject: Computer Science
Abstract/Summary:
Artificial Neural Networks (ANNs) have been a topic of intense research over the last decade. They have often been viewed as black boxes, where the inputs were known and the outputs were computed, but the underlying statistics, and thus the reliability, of the networks were not fully understood. Because of this, there has been hesitation in utilizing ANNs in automated systems such as intelligent flight control. This hesitation is diminishing, however, as individual elements of a neural network can now be probed and their decision-making power assessed. In this study, a neural network is trained and then various ranking methods are used to assess the importance (saliency or decision-making power, DMP) of each input node. The input data is then renormalized according to the DMP input vector and fed to a general regression neural network (GRNN) for training. The accuracies of the DMP ranking methods are then compared against one another using the resulting modified GRNNs. Five ranking methods are tested and compared on four separate data sets. A series of new methods is then introduced that combines the global nonlinear regression capability of ANNs with the local averaging capability of nearest neighbor approaches, based on a weighted distance metric (WDM) provided by the saliency estimates. Two new neural stacking methods are introduced that rely on this WDM. A framework for quantifying error estimation reliability is presented and discussed. Using this framework, the predictive accuracies of MSA and DCM are compared in terms of both the modeled target function and the model's confidence interval about it, using a new measure called the confidence coefficient. A benchmark problem is also introduced as a generic data set for future comparison between inductive learning machines. In addition, the Scaled Conjugate Gradient algorithm (SCG) is implemented for its potential in supervised learning. Two new complexity-regularization methods derived from SCG are implemented that use saliency estimates of various features of the ANN and are driven by feedback from the cross-validation (feedback) set.
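The abstract does not include code, but the idea of ranking input nodes by decision-making power can be illustrated with a small sketch. The Python example below (assuming NumPy) shows one generic perturbation-style saliency estimate; it is only an illustration, not necessarily one of the five ranking methods compared in the study, and the names `model` and `sensitivity_saliency` are hypothetical.

```python
import numpy as np

def sensitivity_saliency(model, X, eps=1e-2):
    """Perturbation-style saliency: mean absolute change in the model's
    output when each input feature is nudged by eps. A generic example
    of an input-ranking method, not the dissertation's specific ones."""
    base = model(X)                          # baseline predictions, shape (n_samples,)
    dmp = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps                      # perturb one feature at a time
        dmp[j] = np.mean(np.abs(model(Xp) - base)) / eps
    return dmp / dmp.max()                   # normalize so the top feature scores 1

# Toy usage with a stand-in "trained network": y = sin(pi*x0) + 0.5*x1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))
model = lambda X: np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]
print(sensitivity_saliency(model, X))        # the unused third feature ranks near zero
```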
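Similarly, the step in which the inputs are renormalized by the DMP vector and passed to a GRNN can be sketched as Specht-style kernel regression over saliency-scaled inputs. This is a minimal, assumed implementation rather than the one used in the dissertation; the function and parameter names (`grnn_predict`, `sigma`, `saliency`) are illustrative.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, saliency, sigma=0.5):
    """Minimal Specht-style GRNN: kernel-weighted average of training targets.
    Inputs are rescaled by per-feature saliency (DMP) estimates before the
    Euclidean distance is computed, so salient features dominate the kernel."""
    Xs_train = X_train * saliency            # renormalize features by DMP
    Xs_query = X_query * saliency

    # Pairwise squared distances between query and training points.
    d2 = ((Xs_query[:, None, :] - Xs_train[None, :, :]) ** 2).sum(axis=-1)

    # Gaussian kernel weights; sigma is the GRNN smoothing parameter.
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # shape (n_query, n_train)

    # Kernel-weighted average of the training targets.
    return (w @ y_train) / (w.sum(axis=1) + 1e-12)

# Toy usage: three inputs, only the first two matter; saliency reflects that.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]
saliency = np.array([1.0, 0.5, 0.05])        # assumed DMP ranking of the inputs
print(grnn_predict(X, y, X[:5], saliency))
```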
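Finally, one plausible way to combine a global nonlinear regressor with local nearest-neighbor averaging under a saliency-weighted distance metric is a residual-correction hybrid, sketched below. This is a generic illustration under assumed names (`wdm_hybrid_predict`), not the dissertation's specific stacking, MSA, or DCM methods.

```python
import numpy as np

def wdm_hybrid_predict(global_model, X_train, y_train, X_query, saliency, k=10):
    """Hybrid predictor: a global model plus a local correction taken from the
    k nearest training points under a saliency-weighted distance metric.
    A generic illustration of mixing global regression with local averaging."""
    resid = y_train - global_model(X_train)          # residuals of the global fit

    # Saliency-weighted squared Euclidean distances, shape (n_query, n_train).
    diff = X_query[:, None, :] - X_train[None, :, :]
    d2 = (saliency * diff ** 2).sum(axis=-1)

    # Indices of the k nearest neighbours for each query point.
    nn = np.argsort(d2, axis=1)[:, :k]
    local = resid[nn].mean(axis=1)                   # local residual average

    return global_model(X_query) + local

# Toy usage: a deliberately biased global fit is corrected by local averaging.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 3))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)
rough = lambda X: 0.9 * np.sin(np.pi * X[:, 0])      # stand-in global regressor
saliency = np.array([1.0, 0.5, 0.05])                # assumed DMP weights
print(wdm_hybrid_predict(rough, X, y, X[:5], saliency))
```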
Keywords/Search Tags: Neural network, Accuracy