
Support Vector Machine And Application Based On Expected Complexity Upper Bound Control

Posted on: 2024-05-30    Degree: Master    Type: Thesis
Country: China    Candidate: H Li    Full Text: PDF
GTID: 2568307127453384    Subject: Software engineering
Abstract/Summary:
Machine learning is an important branch of modern computer science, drawing on mathematical theory and algorithm design. In many applications it markedly improves efficiency and delivers strong results, and it has attracted wide attention across fields. In recent years, the support vector machine (SVM) has become a research hotspot owing to its favorable theoretical properties and has been applied across many industries, chiefly to classification problems in scenarios such as character recognition, face recognition, pedestrian detection, and text classification. An SVM is a learning machine for classification: it builds a predictive model from a finite training set and applies that model to classify previously unseen data, which is why most of its applications are classification tasks.

For most algorithms, complexity reflects performance, and the traditional SVM is no exception. The VC dimension (Vapnik-Chervonenkis dimension), as a measure of model complexity, serves as an indicator of SVM performance: in theory, a low VC dimension improves an SVM's generalization ability. In practice, however, classifiers built on the traditional SVM must handle diverse kinds of real-world data, and for some of these data the upper bound on the VC dimension can become effectively infinite. Although such classifiers achieve good results in particular experiments or applications, their generalization ability is not guaranteed, and performance can degrade when training data are limited. There is therefore still considerable room to improve the generalization ability of SVM-type algorithms.

This thesis focuses on SVM-type algorithms and improves them by optimizing the VC dimension, a measure of algorithm complexity, in order to strengthen generalization. The main contributions are twofold:
(1) An improved LSSVM (Least Squares Support Vector Machine) algorithm is proposed. Building on LSSVM, the upper bound of the VC dimension is minimized and the expected optimal projection is found, which is then incorporated into the LSSVM to classify the data. Compared with the traditional LSSVM, the improved algorithm generalizes better and shows clear advantages on multiple datasets.
(2) The same improvement is applied to the standard SVM. By combining the VC dimension with the SVM method, the traditional algorithm is modified to minimize the upper bound of the VC dimension and find its expected optimal projection, which is then incorporated into the traditional SVM. The improved algorithm is applied to the classification and recognition of real medical images; the recognition results show that, compared with the traditional SVM, the improved method raises both generalization performance and classification accuracy.
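To make the baseline concrete, the following is a minimal sketch of the standard LSSVM classifier that the first contribution builds on: unlike the quadratic program of the classical SVM, LSSVM trains by solving a single linear (KKT) system. This is only the conventional LSSVM of Suykens-type formulations, not the thesis's improved method; the VC-dimension-bound minimization and the expected optimal projection described above are not implemented here, and the RBF kernel, the toy data, and the parameter names (`C`, `gamma`) are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel between the rows of A and B.
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=1.0):
    """Train a standard LSSVM classifier (labels in {-1, +1}).

    Solves the KKT linear system
        [ 0    y^T          ] [ b     ]   [ 0 ]
        [ y    Omega + I/C  ] [ alpha ] = [ 1 ]
    where Omega_ij = y_i * y_j * K(x_i, x_j).
    """
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, y_train, alpha, b, X_test, gamma=1.0):
    # Decision function f(x) = sum_i alpha_i * y_i * K(x, x_i) + b.
    K = rbf_kernel(X_test, X_train, gamma)
    return np.sign(K @ (alpha * y_train) + b)
```

Because training reduces to one dense linear solve, LSSVM is simple and fast on small datasets, but every training point receives a nonzero `alpha` (no sparsity) and, as the abstract notes, nothing in this baseline controls the VC-dimension upper bound; that is precisely the gap the improved algorithm targets.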
Keywords/Search Tags:Machine learning, VC dimension, Support Vector Machine, Classification, Least Squares Support Vector Machine