
Research on the Linear Discriminant Analysis Algorithm in Face Recognition

Posted on: 2013-06-28  Degree: Master  Type: Thesis
Country: China  Candidate: Y Y Jiang  Full Text: PDF
GTID: 2248330371482747  Subject: Computer application technology
Abstract/Summary:
Linear Discriminant Analysis (LDA) is a widely used algorithm in pattern recognition. Its main idea is to minimize the within-class distance while maximizing the between-class distance, thereby obtaining the optimal projection direction for classification. Because it is simple and effective, LDA has been applied in many areas, but it still has limitations that call for further research. The small sample size (SSS) problem arises when the number of samples in the training set is smaller than the dimensionality of the samples: distances between samples become large, so distance measures lose their effectiveness, and the within-class and between-class scatter matrices of LDA become singular, so the optimal projection direction cannot be obtained. The SSS problem is particularly prominent in face recognition. At present, the main factors that degrade the recognition results of linear discriminant analysis algorithms in face recognition are variations in illumination, facial expression, and other external conditions. Such variations cause large changes in image pixel values and give facial images a complex, non-convex distribution. Under varying illumination and expression, the performance of appearance-based recognition algorithms that use linear features (such as LDA) falls off; this is a prevailing problem in face recognition.

This thesis studies linear discriminant analysis algorithms in subspace in depth. The major contributions are as follows:

1.
On the basis of an in-depth study of the principles of linear discriminant analysis and of its current status and open problems, this thesis proves theoretically that two-dimensional linear discriminant analysis (2DLDA) and extended two-dimensional linear discriminant analysis (E2DLDA) use complementary information. 2DLDA is equivalent to column-based LDA: the variance information it uses comes from within the columns of an image and does not include information between different columns. E2DLDA is equivalent to row-based LDA: the variance information it uses comes from within the rows of an image and does not include information between different rows. The information common to 2DLDA and E2DLDA is the variance between corresponding pixels. Beyond that, the remaining discriminant information of 2DLDA is the variance between different pixels in the same column, while that of E2DLDA is the variance between different pixels in the same row; these two parts are completely different. From the perspective of the image matrix, 2DLDA uses within-class and between-class covariance information in the vertical direction, and E2DLDA uses it in the horizontal direction. This proves that the discriminant information used by 2DLDA and E2DLDA is partly consistent and partly complementary, providing a theoretical foundation for fusing the class information in the two directions to design a classifier with better recognition performance.

2. Two-dimensional LDA not only avoids the small sample size problem; compared with the one-dimensional approach, it can also use the spatial structure of the image matrix to estimate the within-class and between-class scatter matrices more accurately, and its computational complexity is much lower.
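The column- and row-direction scatter constructions described above can be sketched directly on image matrices. This is a minimal sketch of the standard 2DLDA/E2DLDA construction, not code from the thesis; the function and variable names are illustrative.

```python
import numpy as np

def scatter_2dlda(images, labels):
    """Within- and between-class scatter matrices for 2DLDA (column
    direction) and E2DLDA (row direction), computed on image matrices.

    images: array of shape (n, h, w); labels: length-n class labels.
    Working on matrices, not vectors, is what sidesteps the small
    sample size singularity of 1D LDA.
    """
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    n, h, w = images.shape
    mean_all = images.mean(axis=0)
    Sw_col = np.zeros((w, w)); Sb_col = np.zeros((w, w))  # 2DLDA:  w x w
    Sw_row = np.zeros((h, h)); Sb_row = np.zeros((h, h))  # E2DLDA: h x h
    for c in np.unique(labels):
        Xc = images[labels == c]
        mc = Xc.mean(axis=0)
        for A in Xc:
            D = A - mc
            Sw_col += D.T @ D   # variance between pixels within a column
            Sw_row += D @ D.T   # variance between pixels within a row
        M = mc - mean_all
        Sb_col += len(Xc) * (M.T @ M)
        Sb_row += len(Xc) * (M @ M.T)
    return Sw_col, Sb_col, Sw_row, Sb_row
```

The 2DLDA projection would then be the leading eigenvectors of `pinv(Sw_col) @ Sb_col`, and the E2DLDA projection those of `pinv(Sw_row) @ Sb_row`.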
Moreover, having proved by theoretical analysis that 2DLDA and E2DLDA carry complementary discriminant information, which provides a theoretical foundation for fusing the class information in the two directions, this thesis proposes a face recognition algorithm called Resampling Bidirectional Two-Dimensional Linear Discriminant Analysis (RB2DLDA), which fuses the complementary features in the two directions by means of a resampling technique. On the basis of RB2DLDA, an improved algorithm, Adaboost Bidirectional Two-Dimensional Linear Discriminant Analysis (AB2DLDA), fuses the complementary features using Adaboost. Aimed at the recognition problems caused by variations in illumination, facial expression, and so on, and at the small sample size problem that generally exists in face recognition, the two novel methods have the following advantages: they always work in two-dimensional space, so computational complexity is lower and the structural relationships of the image space are preserved; they maintain the discriminant information of the horizontal and vertical directions simultaneously, using the complementary information to avoid the one-sidedness of relying on a single direction and thereby ensuring robustness; and they make full use of the existing samples, improving the recognition performance of the classifier.

3. Combining the eigenvalue matrix with the eigenvector matrix, this thesis proposes an adaptive algorithm for setting the dimensionality reduction parameter. This parameter directly affects the recognition results: the number of retained dimensions must be large enough to ensure the recognition rate, while too many dimensions may have a negative effect on classification performance.
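The bidirectional feature extraction underlying RB2DLDA and AB2DLDA (contribution 2) can be sketched as follows. Plain concatenation stands in here for the thesis's resampling/Adaboost fusion, and all names are illustrative assumptions.

```python
import numpy as np

def bidirectional_features(A, L, R):
    """Combine column-direction (2DLDA) and row-direction (E2DLDA)
    features of one image matrix A.

    A: (h, w) image matrix.
    R: (w, q) right projection, as produced by 2DLDA.
    L: (h, p) left projection, as produced by E2DLDA.
    Returns a single feature vector of length h*q + p*w.
    """
    f_col = (A @ R).ravel()    # h*q column-direction features
    f_row = (L.T @ A).ravel()  # p*w row-direction features
    return np.concatenate([f_col, f_row])
```

A downstream classifier (e.g. nearest neighbour, or the Adaboost ensemble the thesis uses) would then operate on these fused vectors, so that neither direction's discriminant information is discarded.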
Setting the dimensionality reduction parameter by contribution rate is simple and intuitive, but it has two disadvantages: it determines the parameter from the eigenvalue matrix alone, without using the eigenvector matrix, which carries information about the two-dimensional image matrix; and the contribution-rate threshold must be set from experience, with the experimental results depending closely on it. This thesis proposes an adaptive method that uses the eigenvalue matrix and the eigenvector matrix together. The main idea is to sort the eigenvectors in descending order of their eigenvalues and to treat each column of the eigenvector matrix as a weight vector over the rows of the original sample; rows whose weights have larger absolute values play a more important role in recognition. By exploiting the weight information contained in the eigenvector matrix, and by requiring no preset threshold, the method becomes adaptive.

4. Experiments on the AR and CAS-PEAL-R1 face databases show that, under variations in illumination and facial expression, the AB2DLDA and RB2DLDA algorithms achieve higher recognition accuracy and robustness than other state-of-the-art methods, and AB2DLDA is also more stable than RB2DLDA. An experiment was also designed to verify the complementarity of 2DLDA and E2DLDA; it confirms that RB2DLDA and AB2DLDA exploit this complementary information to improve recognition accuracy.
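The contribution-rate baseline and the eigenvector-weight idea from contribution 3 can be sketched as follows. The exact aggregation rule in `eigenvector_row_weights` (an eigenvalue-weighted sum of absolute eigenvector entries) is an assumption for illustration, not the thesis's formula.

```python
import numpy as np

def dims_by_contribution(eigvals, threshold=0.95):
    """Baseline criticised in the text: keep the smallest k whose top-k
    eigenvalues reach `threshold` of the eigenvalue total. The threshold
    must be hand-tuned, and the eigenvector matrix is ignored."""
    vals = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    ratio = np.cumsum(vals) / vals.sum()
    return int(np.searchsorted(ratio, threshold) + 1)

def eigenvector_row_weights(eigvals, eigvecs):
    """Sketch of the adaptive idea: sort eigenvectors by descending
    eigenvalue and read the absolute entries of each eigenvector as
    weights on the rows of the original sample; rows with larger weights
    matter more for recognition, with no threshold to tune."""
    vals = np.asarray(eigvals, dtype=float)
    order = np.argsort(vals)[::-1]
    V = np.abs(np.asarray(eigvecs, dtype=float)[:, order])
    return V @ vals[order]   # per-row importance scores
```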
Keywords/Search Tags: face recognition, resampling, Adaboost, bidirectional two-dimensional linear discriminant analysis (2DLDA)