Dynamic Learning Algorithms Of Neural Networks For Large-scale Data Sets

Posted on: 2015-01-06
Degree: Master
Type: Thesis
Country: China
Candidate: F Qin
GTID: 2268330425484730
Subject: Computer application technology

Abstract:
Classification accuracy and learning speed are two principal criteria for evaluating classifiers. Multilayer perceptrons (MLPs) with sigmoid activation functions are effective for small-sample, low-dimensional, few-class learning tasks. However, MLPs trained with the existing back-propagation (BP) algorithms perform poorly on large-scale classification tasks. The main contributions of this thesis are as follows:

1) The conventional BP algorithm for MLPs with the standard sigmoid activation function converges slowly and is easily trapped in local minima. The influence of the strength and grade factors of generalized sigmoid activation functions on learning speed and classification accuracy is analyzed, and appropriate ranges for these parameters are given (a sketch of such an activation function follows this abstract).

2) A modular MLP is proposed for classification problems with comparatively many classes. The one-against-one (OAO) task decomposition method often produces too many MLP modules, which leads not only to long learning times and complicated structures but also to low classification accuracy. The one-against-all (OAA) decomposition, on the other hand, yields severely imbalanced subproblems. A learning strategy that adds virtual samples to the minority class is presented to handle such severely imbalanced datasets (see the virtual-sample sketch below).

3) Reading in all samples leads to long learning times when the sample count is large. In essence, BP learning locates the decision boundaries through iteration, and only a small fraction of samples near those boundaries plays an important role in finding the optimal boundaries. An MLP can therefore achieve the same or similar performance by learning only the samples near the boundaries rather than the full large-scale dataset. Accordingly, a dynamic learning algorithm is proposed to speed up the learning process (a boundary-sample selection sketch follows).

4) The conventional BP algorithm takes a long time to find complicated decision boundaries when between-class samples are close or overlapping, i.e., when the class margins are very small. To accelerate learning, a mixed feature coding scheme is proposed that maps samples from the original low-dimensional space into a higher-dimensional feature space, enlarging the margins as much as possible while preserving, or nearly preserving, the internal neighborhood relationships.

The experiments use three larger datasets as application objects: Letter, Shuttle, and MNIST handwritten digit recognition. The results show that the proposed modular MLPs together with the dynamic learning algorithm achieve both fast learning and good generalization performance.
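Illustrating item 1: the abstract names strength and grade factors but not their exact functional form, so the parameterization below (strength as an amplitude scale, grade as a steepness factor) is an assumption, not the thesis's definition.

```python
import numpy as np

def general_sigmoid(x, strength=1.0, grade=1.0):
    """Generalized sigmoid activation.

    'strength' scales the output amplitude and 'grade' controls the
    steepness of the transition. The parameter names follow the
    abstract; this exact form is an assumption for illustration.
    """
    return strength / (1.0 + np.exp(-grade * x))
```

With strength = 1 and grade = 1 this reduces to the standard logistic sigmoid; larger grade values sharpen the transition, which is the kind of effect the thesis analyzes for its impact on learning speed and accuracy.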
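Illustrating the virtual-sample strategy of item 2: the abstract does not specify how virtual minority-class samples are generated, so Gaussian perturbation of existing samples is used here as one common, hypothetical choice.

```python
import numpy as np

def add_virtual_samples(X_minority, n_new, noise_scale=0.05, seed=0):
    """Augment the minority class with virtual samples.

    New samples are created by adding small Gaussian noise to randomly
    chosen existing minority samples. The generation scheme is an
    assumption; the thesis abstract leaves it unspecified.
    """
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_minority), size=n_new)
    noise = rng.normal(0.0, noise_scale, size=(n_new, X_minority.shape[1]))
    return np.vstack([X_minority, X_minority[idx] + noise])
```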
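Illustrating item 3: one plausible reading of "samples near the boundaries" is samples whose two largest network outputs are nearly tied. The gap criterion and the threshold below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def select_boundary_samples(outputs, margin=0.3):
    """Return a boolean mask of samples near the decision boundary.

    'outputs' is an (n_samples, n_classes) array of MLP activations.
    A sample is kept when the gap between its two largest activations
    is below 'margin'; both the criterion and the threshold are
    hypothetical choices for this sketch.
    """
    sorted_out = np.sort(outputs, axis=1)        # ascending per row
    gap = sorted_out[:, -1] - sorted_out[:, -2]  # top-1 minus top-2
    return gap < margin
```

In a dynamic scheme of this kind, the network would be evaluated on the full set each epoch and then trained only on the masked samples, shrinking the effective training set as the boundaries stabilize.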
Keywords/Search Tags: Multilayer perceptrons, Dynamic learning algorithms, Modular, Feature representation, Sample selection