
Face Recognition Based On Support Vector Machine

Posted on: 2008-04-08
Degree: Master
Type: Thesis
Country: China
Candidate: X M Xu
Full Text: PDF
GTID: 2208360212486582
Subject: Communication and Information System
Abstract/Summary:
As a focus of research and a difficult problem in the fields of pattern recognition and image processing, face recognition (FR) is of great significance and has a wide range of practical applications. Facial images lie in a high-dimensional space and face databases are large, so it is difficult to process such a large quantity of raw data directly. How to extract the principal features from face images is therefore a key problem. The K-L transform is a method that describes the samples with a small number of features and thereby reduces the dimension of the feature space. Applying the K-L transform to obtain the principal components of a face image greatly reduces the image dimension while preserving the main information of the original image, so the face remains recognizable. Different classifiers can then be applied to the projected faces and compared, yielding an effective method for rapid face recognition.

Furthermore, human faces vary greatly in appearance over time. Many influences, such as lighting, background, facial expression and facial details, easily affect the recognition result. Collecting face images is limited in practice by time, place, background and quantity. Compared with the dimension of the face image vector, FR is therefore a small-sample problem. When solving such a small-sample, high-dimensional, nonlinear problem, many traditional pattern recognition methods tend to over-fit. Support vector machines (SVMs) are specially devised to address the over-fitting and small-sample problems. Based on the structural risk minimization (SRM) principle of Statistical Learning Theory (SLT), an SVM selects the optimal separating hyperplane as its decision function: the hyperplane that correctly separates the sample set while achieving the largest margin between the two classes.
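The K-L transform step described above can be sketched in a few lines of Python. This is a minimal illustration, not the thesis's actual implementation: the variable names (`faces`, `eigenfaces`, `features`) and the random data standing in for real face images are assumptions, and it uses the standard "eigenfaces" trick of eigen-decomposing the small Gram matrix rather than the full covariance matrix, which is the usual way to handle the small-sample case.

```python
import numpy as np

# Hypothetical stand-in for a set of vectorized face images:
# 40 samples, each a 32x32 image flattened to a 1024-D vector.
rng = np.random.default_rng(0)
faces = rng.standard_normal((40, 1024))

# Center the data on the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigen-decompose the small 40x40 Gram matrix instead of the
# 1024x1024 covariance matrix (the classic eigenfaces trick).
gram = centered @ centered.T
eigvals, eigvecs = np.linalg.eigh(gram)

# Keep the k leading components (eigh returns ascending order).
k = 20
top = eigvecs[:, -k:][:, ::-1]

# Map Gram-matrix eigenvectors back to image space and normalize,
# giving an orthonormal "eigenface" basis.
eigenfaces = centered.T @ top
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)

# Project each face onto that basis: 1024-D image -> 20-D feature.
features = centered @ eigenfaces
print(features.shape)
```

The projected `features` rows are the low-dimensional descriptors that a classifier would then receive; the dimension drops from 1024 to 20 while the leading eigenfaces retain most of the image variance.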
The separation problem can thus be formulated as a quadratic optimization problem subject to simple constraints, and this quadratic problem has a unique global optimum. By introducing a kernel function, nonlinearly separable samples are implicitly projected into a high-dimensional space (the so-called "feature space"), and the new separation problem is solved linearly there. Because the projection is carried out through the kernel function, the computational complexity is not increased. With different kernel functions, an SVM can behave like several traditional pattern recognition methods, and it is currently a preferred classifier.

In this paper we apply several approaches to the facial feature extraction and recognition task, using face detection results as the input data. We use the K-L transform to extract face image features, then classify them with SVM and K-nearest neighbors (KNN), and compare the two classification methods. The multi-class problem is translated into a set of two-class problems, which overcomes obstacles faced by some traditional methods. We combine SVM and PCA to achieve good learning and generalization performance, as shown by experiments on the ORL face database and on a face database collected by the author. The recognition results demonstrate the effectiveness of the method. Finally, a simple face recognition system was designed.
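The SVM-versus-KNN comparison described above can be sketched with scikit-learn. This is an illustrative sketch only, not the thesis's experiment: the synthetic clustered data stands in for PCA face features, the subject count and hyperparameters (`C=10.0`, RBF kernel, 3 neighbors) are assumptions, and scikit-learn's `SVC` handles the multi-class case by decomposing it into pairwise two-class problems, matching the multi-class-to-two-class translation mentioned in the text.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for PCA face features: 5 "subjects",
# 20 samples each, 20-D feature vectors around per-subject means.
rng = np.random.default_rng(0)
n_subjects, per_subject, dim = 5, 20, 20
centers = rng.standard_normal((n_subjects, dim)) * 3.0
X = np.vstack([c + rng.standard_normal((per_subject, dim)) for c in centers])
y = np.repeat(np.arange(n_subjects), per_subject)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# RBF-kernel SVM: the kernel implicitly maps samples into a
# high-dimensional feature space where they separate linearly.
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)

# KNN baseline for the comparison described in the abstract.
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)

svm_acc = svm.score(X_te, y_te)
knn_acc = knn.score(X_te, y_te)
print(f"SVM accuracy: {svm_acc:.2f}, KNN accuracy: {knn_acc:.2f}")
```

On real face data, the margin-maximizing SVM typically generalizes better than KNN when samples per subject are few, which is the small-sample regime the abstract emphasizes.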
Keywords/Search Tags: Face Recognition, Statistical Learning Theory, Support Vector Machines, Optimal Separating Hyperplane, Kernel Function, Principal Component Analysis