
A Theoretical and Simulation Study of the Learning Algorithms of Neural Networks and Support Vector Machines

Posted on: 2004-08-02
Degree: Master
Type: Thesis
Country: China
Candidate: Q P Liu
Full Text: PDF
GTID: 2168360092981978
Subject: Mechanical and electrical engineering
Abstract/Summary:
Traditional neural networks, namely BP networks, have long suffered from three drawbacks in network training and network design that are hard to overcome: slow training, a tendency to become trapped in local minima, and poor generalization of the trained networks. This thesis systematically studies the causes of these drawbacks and the methods for overcoming them at two levels: the algorithm level and the computing theory level.

At the algorithm level, the current training algorithms for neural networks, including gradient algorithms, intelligent learning algorithms, and hybrid algorithms, are studied comparatively. The optimization principle of the BP algorithm for network training is analyzed in detail and the causes of its serious disadvantages are identified; the optimization principles of two kinds of improved BP algorithms are then described within a unified theoretical framework. The global optimization algorithms for neural networks, chiefly the genetic algorithm, are expounded in detail, and an improved genetic algorithm is proposed. Finally, the training performance of the various algorithms is compared in a simulation experiment on a benchmark neural network learning problem, and the view that the genetic algorithm suffers from the "curse of dimensionality" is put forward. These studies indicate that the algorithm level can only address the first two drawbacks, by applying advanced optimization algorithms within the intrinsic framework of the neural network, and that a major breakthrough is difficult to achieve given the limits of current optimization theory.

At the computing theory level, the reasons for the difficulty of neural network design are analyzed from the standpoint of machine learning. Statistical learning theory and the regularization approach, which guide neural network design, are systematically expounded; most importantly, the theory of the support vector machine, which follows directly from statistical learning theory, is treated comprehensively. Finally, the generalization capability of neural networks and support vector machines is studied through a simulation experiment on learning tasks such as function regression and system modeling. The studies at this level indicate that the support vector machine overcomes all three drawbacks of neural network learning.

In conclusion, studying the essence of machine learning from the standpoint of computing theory and devising new learning machines with good properties, so as to avoid the hardly surmountable handicaps of neural network learning, is a practicable way to solve the problems of neural network learning at their root. Finally, a personal outlook on further work in the fields of neural networks and support vector machines is presented.
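To make the kind of simulation experiment described above concrete, the sketch below compares a BP-trained neural network with a support vector machine on a one-dimensional function-regression task. It is a minimal illustration only: the sinc target function, the scikit-learn models, and all hyperparameters are assumptions chosen for demonstration and are not taken from the thesis's actual experimental setup.

# Minimal sketch: compare a BP-trained neural network with support vector
# regression on a 1-D function-regression task. The target function and all
# hyperparameters are illustrative assumptions, not the thesis's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Noisy samples of sin(x)/x as a stand-in benchmark regression problem.
X = rng.uniform(-10, 10, size=(200, 1))
y = np.sinc(X / np.pi).ravel() + rng.normal(0.0, 0.05, size=200)

# Held-out grid for estimating generalization error.
X_test = np.linspace(-10, 10, 500).reshape(-1, 1)
y_true = np.sinc(X_test / np.pi).ravel()

# BP network: a small multilayer perceptron trained by gradient descent.
bp_net = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                      solver="sgd", learning_rate_init=0.01,
                      max_iter=5000, random_state=0)
bp_net.fit(X, y)

# Support vector machine for regression with an RBF kernel.
svm = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=0.5)
svm.fit(X, y)

for name, model in [("BP network", bp_net), ("SVM (SVR)", svm)]:
    mse = np.mean((model.predict(X_test) - y_true) ** 2)
    print(f"{name}: test MSE = {mse:.4f}")

The held-out mean squared error of each model serves as a simple proxy for the generalization capability discussed above; in the same spirit, the thesis's experiments compare the two learning machines on regression and system-modeling tasks.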
Keywords/Search Tags: neural network, genetic algorithm, gradient algorithm, statistical learning theory, support vector machine, regularization approach