
Localization of feature space in SVM

Posted on: 2005-03-22
Degree: Ph.D
Type: Thesis
University: The University of Iowa
Candidate: Ryu, Gi-yung
Full Text: PDF
GTID: 2458390008490481
Subject: Engineering
Abstract/Summary:
The Support Vector Machine (SVM), a pattern classification algorithm grounded in statistical learning theory, was developed by V. Vapnik and his team at AT&T Bell Labs. The main idea is to map the original set of vectors into a higher-dimensional feature space and then to construct in that space a linear decision rule (a "separating hyperplane").

Training a support vector machine is equivalent to solving a linearly constrained quadratic programming problem in which the number of variables equals the number of data points. This optimization problem is known to be challenging when the number of data points exceeds a few thousand.

In this thesis, the feature space is partitioned using prior information or statistical information about the data, and the support vector learning algorithm is applied to each partition separately. Doing so decreases the learning time and yields valuable information about the data, namely, how the accuracy differs across the partitions.

Previously, training and testing of a Support Vector Machine were done globally, yielding a single estimate of the model's accuracy. Localization of the feature space instead allows the accuracy estimate to vary depending on where in the feature space the data to be classified lie.

A new method is developed that combines localization of the feature space with a greedy algorithm. For some data sets, this method is shown to produce a statistically significant increase in accuracy.
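For reference, the linearly constrained quadratic program mentioned above is the standard soft-margin SVM dual (textbook material rather than anything specific to this thesis); with ℓ training pairs (x_i, y_i), kernel K, and box constant C, it has one variable α_i per data point:

```latex
\max_{\alpha \in \mathbb{R}^{\ell}} \;
  \sum_{i=1}^{\ell} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell}
      \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad \text{s.t.} \quad
  \sum_{i=1}^{\ell} \alpha_i y_i = 0, \qquad 0 \le \alpha_i \le C .
```

Because the number of variables equals ℓ, the QP grows with the data set, which is why training becomes expensive beyond a few thousand points and why restricting each subproblem to one partition of the feature space reduces the learning time.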
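The localization scheme described above can be illustrated with a short sketch. This is a minimal illustration only, assuming scikit-learn and using k-means clustering as a stand-in for the thesis's partition based on prior or statistical information; it trains one SVM per cell and reports per-cell (localized) accuracy alongside the overall figure.

```python
# Sketch of localized SVM training: partition the input space, fit one SVM
# per cell, and estimate accuracy separately within each cell.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

k = 4  # number of partitions (hypothetical choice)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr)
cells_tr = km.labels_
cells_te = km.predict(X_te)

local_models = {}
for c in range(k):
    idx = cells_tr == c
    if np.unique(y_tr[idx]).size < 2:
        # Degenerate cell containing a single class: store that label directly.
        local_models[c] = int(y_tr[idx][0])
    else:
        # Each cell's QP involves only the points in that cell, so it is much
        # smaller than one global QP over all 2100 training points.
        local_models[c] = SVC(kernel="rbf", C=1.0).fit(X_tr[idx], y_tr[idx])

y_pred = np.empty_like(y_te)
for c in range(k):
    mask = cells_te == c
    if not mask.any():
        continue
    model = local_models[c]
    y_pred[mask] = model if isinstance(model, int) else model.predict(X_te[mask])
    # Localized accuracy estimate: the figure can differ from cell to cell.
    print(f"cell {c}: n={mask.sum()}, "
          f"accuracy={accuracy_score(y_te[mask], y_pred[mask]):.3f}")

print(f"overall accuracy: {accuracy_score(y_te, y_pred):.3f}")
```

The per-cell printout is the point of the exercise: instead of a single global accuracy, each region of the feature space gets its own estimate, mirroring the localized evaluation the abstract describes.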
Keywords/Search Tags:Feature space, Vector machine, Support vector, Data, Localization, Algorithm, Accuracy