
Acceleration And Application Of Support Vector Machines

Posted on: 2011-07-09    Degree: Doctor    Type: Dissertation
Country: China    Candidate: K K Cao    Full Text: PDF
GTID: 1118330332984023    Subject: Circuits and Systems
Abstract/Summary:
Support vector machines (SVMs) are state-of-the-art machine learning methods based on statistical learning theory and structural risk minimization. They perform particularly well on non-linear, high-dimensional pattern classification and function estimation problems with small sample sizes, and they overcome drawbacks of traditional machine learning methods such as over-fitting, local minima, and the curse of dimensionality. Because of this remarkable performance, SVMs have attracted considerable attention since they were proposed and have become very popular in the field of machine learning. However, their computational cost is relatively high, leading to long training and testing times, which hinders their application and popularity in industry. This dissertation improves the speed of SVMs from several different aspects. The main contents are as follows:

1) Incremental/decremental (on-line) training algorithm

A multi-sample on-line training algorithm for support vector regression (SVR) is proposed that performs incremental and decremental training with multiple training samples simultaneously, improving on the previously proposed algorithm, which can handle only one training sample at a time. The algorithm is based on the Lagrange multiplier method and the Karush-Kuhn-Tucker (KKT) conditions. During each iteration, it modifies the Lagrange multipliers of the updated samples while ensuring that the KKT conditions remain fulfilled for all other training samples; the on-line training terminates when all training samples fulfill the KKT conditions (see the sketch below). Experimental results show that the proposed algorithm trains SVR models on-line effectively and is much more efficient than the single-sample on-line training algorithm. It is also faster than batch training when the number of training samples added or removed at a time is relatively small, and can therefore serve as an effective on-line training algorithm for SVR. It is especially useful in applications such as time series prediction and identification of time-varying systems.
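As an illustration only, the following Python sketch shows the epsilon-insensitive KKT check that acts as the stopping criterion of such an on-line update loop. The RBF kernel, the function names, and the magnitude-only tolerance tests are assumptions made for brevity; the multiplier-adjustment step derived in the dissertation is not reproduced here.

```python
# Illustrative sketch (not the dissertation's exact formulation): checking the
# epsilon-insensitive KKT conditions of an SVR model, used as the stopping
# criterion of an incremental/decremental update loop.
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    """RBF kernel matrix between row-sample matrices X and Z (assumed kernel choice)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kkt_violators(X, y, beta, b, C=10.0, eps=0.1, tol=1e-3):
    """Indices of samples violating the epsilon-SVR KKT conditions.

    beta[i] = alpha_i - alpha_i* is the signed dual coefficient of sample i.
    Only the magnitude conditions are tested; the sign pairing between beta
    and the residual is omitted in this sketch.
    """
    f = rbf_kernel(X, X) @ beta + b      # current model outputs
    e = f - y                            # signed residuals
    viol = []
    for i, (bi, ei) in enumerate(zip(beta, e)):
        if abs(bi) <= tol and abs(ei) > eps + tol:                   # non-SV outside the tube
            viol.append(i)
        elif tol < abs(bi) < C - tol and abs(abs(ei) - eps) > tol:   # free SV off the tube edge
            viol.append(i)
        elif abs(bi) >= C - tol and abs(ei) < eps - tol:             # bound SV inside the tube
            viol.append(i)
    return viol

# Toy usage: with all multipliers at zero, every sample outside the eps-tube
# is flagged, so a multiplier update would still be required.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
print(kkt_violators(X, y, beta=np.zeros(20), b=0.0))
```

In the full algorithm, the multipliers of the added or removed samples are then adjusted and the check repeated until this list is empty.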
2) Hardware implementation of SVMs

To facilitate the application of support vector machines in embedded systems, a parallel and scalable digital architecture for training SVMs, based on the Sequential Minimal Optimization (SMO) algorithm, is proposed and tested on an FPGA platform. By building on this mature and popular training algorithm, the numerical instability issues that may arise in traditional numerical algorithms are avoided. The inherent parallelism of the SMO algorithm is extracted and mapped to multiple processing units. Experimental results show that the proposed architecture solves SVM training problems effectively with inexpensive fixed-point arithmetic and achieves good scalability. It overcomes the drawbacks of previously proposed SVM hardware, which lacks the flexibility required for embedded applications, and is thus more suitable for embedded use. We also propose a design methodology for hardware implementation of SVMs based on the popular MapReduce parallel computing model, to simplify architecture design and improve reusability. Both the SVM training and classification algorithms are used as examples to show how they can be mapped to scalable architectures easily and effectively with this method (a schematic map/reduce decomposition is sketched below).

3) Application to lithography hotspot detection

A method is proposed to improve the efficiency of SVMs in lithography hotspot detection. Frequency-domain features of integrated circuit (IC) layout samples are first extracted with the discrete cosine transform (DCT). A multi-objective genetic algorithm is then used for feature selection, so that fewer features are involved in hotspot detection and the detection speed is improved. SVM parameters are optimized jointly with feature selection to obtain the best possible detection precision (a pipeline sketch is given below). Experimental results show that the proposed method is much faster than the previous pixel-based detection method without compromising its precision. The method does not need any information about process parameters or possible resolution enhancement technologies (RET), and is therefore suitable for integration into the physical design flow for fast pre-detection of lithography hotspots.
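As a rough illustration of the parallelism the architecture exploits, the sketch below decomposes one SMO-style step into a map phase (per-sample error-cache updates, which are independent and can be assigned to separate processing units) and a reduce phase (selecting a maximal violating pair, with Keerthi-style selection used here as a stand-in for the working-set selection actually implemented). The names, the precomputed kernel matrix K, and the software formulation are assumptions; the dissertation's design is a fixed-point hardware architecture, not Python code.

```python
# Illustrative map/reduce view of one SMO step (software sketch only).
import numpy as np

def map_update_errors(F, K, y, i, j, d_ai, d_aj):
    """Map phase: after alpha_i and alpha_j change by d_ai and d_aj, every
    cached error F[t] = f(x_t) - y_t shifts by
    y[i]*d_ai*K[i, t] + y[j]*d_aj*K[j, t].
    Each processing unit can update its own slice of F independently."""
    return F + y[i] * d_ai * K[i] + y[j] * d_aj * K[j]

def reduce_select_pair(F, alpha, y, C=1.0, tol=1e-3):
    """Reduce phase: pick a maximal violating pair (Keerthi-style selection,
    standing in for the architecture's working-set selection)."""
    in_up  = ((y > 0) & (alpha < C - tol)) | ((y < 0) & (alpha > tol))
    in_low = ((y > 0) & (alpha > tol))     | ((y < 0) & (alpha < C - tol))
    i_up  = int(np.where(in_up,  F,  np.inf).argmin())
    i_low = int(np.where(in_low, F, -np.inf).argmax())
    converged = F[i_low] <= F[i_up] + 2 * tol   # no violating pair remains
    return i_up, i_low, converged
```

The hotspot-detection flow can likewise be illustrated with a small sketch. The clip size, the fixed low-frequency DCT mask, and the SVM parameters below are placeholders: in the dissertation the retained coefficients and the SVM parameters are chosen jointly by a multi-objective genetic algorithm rather than fixed by hand, and the data are real IC layout clips rather than random arrays.

```python
# Illustrative hotspot-detection pipeline: 2-D DCT features from layout
# clips, a fixed low-frequency feature mask, and an RBF-kernel SVM.
# The mask size and (C, gamma) are placeholders for the joint GA-based
# feature selection and parameter optimization described in the text.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC

def dct_features(clip, keep=8):
    """Rasterized layout clip -> low-frequency block of 2-D DCT coefficients."""
    coeffs = dctn(clip, norm="ortho")
    return coeffs[:keep, :keep].ravel()

# Toy data: random 64x64 "clips" with random labels, for illustration only.
rng = np.random.default_rng(0)
clips = rng.integers(0, 2, size=(200, 64, 64)).astype(float)
labels = rng.integers(0, 2, size=200)        # 1 = hotspot, 0 = non-hotspot

X = np.array([dct_features(c) for c in clips])
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X[:150], labels[:150])
print("toy accuracy:", clf.score(X[150:], labels[150:]))
```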
Keywords/Search Tags: Support Vector Machine (SVM), Machine Learning, Incremental Training, Sequential Minimal Optimization (SMO), Field Programmable Gate Array (FPGA), Lithography Hotspot Detection, Genetic Algorithm, Feature Selection