
PSO for supervised learning of fuzzy ARTMAP neural networks

Posted on: 2007-02-23  Degree: M.Ing  Type: Thesis
University: École de Technologie Supérieure (Canada)  Candidate: Henniges, Philippe  Full Text: PDF
GTID: 2448390005967397  Subject: Artificial Intelligence
Abstract/Summary:
The impact on fuzzy ARTMAP neural network performance of decisions taken during batch supervised learning is assessed through computer simulations performed on several pattern recognition problems, such as handwritten numerical character recognition. To that end, we study the impact on this neural network of several factors: training set size, training strategy, normalisation technique, class overlap structure, MatchTracking polarity, and the fuzzy ARTMAP parameters. By allowing the network to learn real and synthetic data under various conditions, the extent of performance degradation is compared in terms of generalisation error and resource requirements.

Degradation of fuzzy ARTMAP performance due to overtraining is shown to depend on factors such as training set size and the number of training epochs, and to occur for pattern recognition problems in which class distributions overlap. As an alternative to the commonly employed hold-out training strategy, a strategy based on Particle Swarm Optimization (PSO), which determines both network parameters and weights such that generalisation error is minimized, is introduced.

Through a comprehensive set of simulations, it is shown that fuzzy ARTMAP trained with the PSO strategy produces a significantly lower generalisation error than with typical training strategies. Furthermore, the PSO strategy eliminates the degradation of generalisation error due to overtraining resulting from training set size, number of training epochs, and data set structure. Overall, the results obtained with the PSO strategy highlight the importance of optimizing the parameters (along with the weights) for each problem, using a consistent objective function.
In fact, the parameters found using this strategy vary significantly with, e.g., training set size and data set structure, and always differ considerably from the popular choice of parameters that minimizes resource requirements.

The PSO strategy is inherently a batch learning mechanism, and as such is not consistent with the ARTMAP philosophy, in that parameters cannot be adapted on the fly through on-line, supervised or unsupervised, incremental learning. Nonetheless, it reveals the extent to which parameter values can improve the generalisation error of fuzzy ARTMAP and mitigate the performance degradation caused by overtraining. To the best of our knowledge, this is the first training strategy developed to optimize the four parameters of the fuzzy ARTMAP neural network.
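The wrapper character of the strategy can be sketched as a standard global-best PSO loop over fuzzy ARTMAP's four parameters. This is a minimal illustration, not the thesis's implementation: the parameter names and search ranges (choice parameter α, learning rate β, match-tracking ε, baseline vigilance ρ̄) and all swarm settings are assumptions, and the objective function is a placeholder standing in for the validation generalisation error of a trained network.

```python
import random

# Assumed search ranges for fuzzy ARTMAP's four parameters
# (alpha, beta, epsilon, rho_bar); the bounds are illustrative.
BOUNDS = [(0.001, 1.0),  # alpha: choice parameter
          (0.0, 1.0),    # beta: learning rate
          (-1.0, 1.0),   # epsilon: match-tracking polarity/magnitude
          (0.0, 1.0)]    # rho_bar: baseline vigilance

def pso_minimize(objective, bounds, n_particles=20, n_iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO. `objective` stands in for the generalisation
    error measured on a validation set after training fuzzy ARTMAP
    with the candidate parameter vector."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialise particle positions uniformly inside the bounds.
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive + social velocity update.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                # Move and clamp to the feasible parameter range.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the actual strategy, each objective call would train a fuzzy ARTMAP network with the candidate parameters and return its validation-set error, which is why the search is a batch procedure: the whole training set must be available to score each particle.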
Keywords/Search Tags:Fuzzy ARTMAP, Training, Strategy, Parameters, Generalisation error, Pattern recognition problems, Performance, Data set structure