Support Vector Machine (SVM) is a learning method developed from Statistical Learning Theory (SLT). It effectively addresses many problems encountered by machine learning methods, such as model selection, overfitting, nonlinearity, and the curse of dimensionality in high-dimensional spaces, and is therefore widely studied in machine learning research. This thesis mainly studies algorithms for training Support Vector Machines.

Training an SVM on a large-scale sample set is time-consuming. Based on an analysis of the geometric distribution of sample sets, this thesis proposes a Quasi Choosing (QC) algorithm. QC first trains on a small sample set to obtain a quasi-optimal hyperplane, then uses that hyperplane to eliminate a large number of samples, thereby reducing the time needed to train the SVM. Experiments show that with QC, SMO training time typically decreases by about 50 percent without any loss of prediction accuracy.

Building on the Quasi Choosing algorithm, the thesis further proposes an incremental Support Vector Machine learning algorithm, called QC-ISVM. The algorithm takes full advantage of historical training results to improve subsequent training accuracy, and it can also prune the incremental sample set. Experiments demonstrate that QC-ISVM approaches the accuracy obtained by training on the entire training set, and that when the incremental sample set is large, the algorithm markedly reduces training cost.
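The QC idea described above can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual implementation: it assumes scikit-learn's `SVC`, synthetic Gaussian data, and a hypothetical band-width threshold (`1.5`) for deciding which samples lie close enough to the quasi-optimal hyperplane to be kept as support-vector candidates.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic two-class data (stand-in for a large-scale sample set)
X = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

# Step 1: train on a small random subset -> quasi-optimal hyperplane
idx = rng.choice(len(X), size=100, replace=False)
quasi = SVC(kernel="linear").fit(X[idx], y[idx])

# Step 2: eliminate samples far from the hyperplane; only those near it
# can become support vectors. The band width 1.5 is a hypothetical
# tuning parameter, not a value from the thesis.
distance = np.abs(quasi.decision_function(X))
keep = distance < 1.5
X_reduced, y_reduced = X[keep], y[keep]

# Step 3: train the final SVM on the much smaller reduced set
final = SVC(kernel="linear").fit(X_reduced, y_reduced)
```

In this sketch the reduced set is typically a small fraction of the original, while the final classifier's accuracy on the full data stays close to that of an SVM trained on all samples, mirroring the trade-off the abstract reports for QC with SMO.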