
Heteroscedastic Discriminant Analysis In Automatic Face Recognition

Posted on: 2011-02-04
Degree: Master
Type: Thesis
Country: China
Candidate: S B He
Full Text: PDF
GTID: 2208330332978861
Subject: Signal and Information Processing
Abstract/Summary:
Face recognition is a main research direction of biometrics. It is widely used in secure authentication systems, credit card verification, medical and archive management, video conferencing, human-computer interaction, and public security systems, and it has become a hotspot in pattern recognition and artificial intelligence. Face recognition is an identification technique in which face images are processed and analyzed by computer and discriminant features are then extracted. Compared with other biometric recognition techniques, face recognition is more direct, friendly, and convenient, and it is more easily accepted by users. In face recognition it is very important to overcome the influence of lighting and expression changes, and discriminant feature extraction is a key step toward higher recognition rates.

Linear discriminant analysis (LDA), proposed by R. A. Fisher in 1936, is widely used in pattern recognition and computer vision. The goal of LDA is to extract features with the best separability. LDA is related to the maximum likelihood estimation of the parameters of a Gaussian model under two assumptions. The first is that all of the class-discriminant information resides in a lower-dimensional subspace of the original feature space. The second is that the within-class covariance matrices are equal for all classes. To address this deficiency of LDA, heteroscedastic discriminant analysis (HDA) was proposed: HDA assumes that the within-class covariance matrices are heteroscedastic and defines a separate within-class matrix for each class.

In this paper, building on the heteroscedastic within-class covariance matrices of HDA, we make further studies on HDA:
1) The samples are nonlinearly mapped into a higher-dimensional feature space via the kernel method, and the optimal projection matrix is then extracted in that space by HDA; this yields kernel heteroscedastic discriminant analysis (KHDA). In KHDA a kernel function replaces the complex nonlinear mapping function, so the nonlinear features of the samples are extracted while the computational complexity is reduced.
2) The two-dimensional image matrices are used directly for training, giving two-dimensional heteroscedastic discriminant analysis (2DHDA), which overcomes the curse of dimensionality and the Small Sample Size problem (S3 problem) of HDA.
3) The within-class scatter matrices of 2DHDA are redefined as the weighted sum of the within-class scatter matrices of 2DLDA and 2DHDA, giving weighted two-dimensional heteroscedastic discriminant analysis (W2DHDA). By varying the weighting factor, more robust within-class matrices are obtained, which is the foundation for higher recognition rates (see the sketch after this abstract).

Experimental results on the ORL (Olivetti Research Laboratory) face database, the Yale face database, and a mixed ORL-Yale face database show the validity of KHDA, 2DHDA, and W2DHDA for face recognition.
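To make the construction concrete, the following is a minimal NumPy sketch of the heteroscedastic within-class scatter assumed by HDA and of a W2DHDA-style weighted combination with the pooled (LDA-like) scatter. The function names within_class_scatters and weighted_scatters and the single weighting factor alpha are illustrative assumptions, not the thesis's exact formulation; the sketch operates on vectorized samples, whereas 2DHDA works directly on the image matrices and KHDA applies the same idea to kernel-mapped samples.

import numpy as np

def within_class_scatters(X, y):
    # Class-specific within-class scatter matrices, as assumed by HDA.
    # X: (n_samples, n_features) vectorized training samples; y: class labels.
    # Returns one scatter matrix per class instead of the single pooled
    # within-class scatter used by classical LDA.
    scatters = {}
    for c in np.unique(y):
        Xc = X[y == c]
        diff = Xc - Xc.mean(axis=0)
        scatters[c] = diff.T @ diff  # each class keeps its own (heteroscedastic) scatter
    return scatters

def weighted_scatters(per_class, alpha):
    # Illustrative W2DHDA-style blend: combine the pooled scatter with each
    # class-specific scatter through a weighting factor alpha in [0, 1].
    pooled = sum(per_class.values())
    return {c: alpha * pooled + (1.0 - alpha) * Sc for c, Sc in per_class.items()}

# Example: 20 random 100-dimensional samples from 2 classes.
X = np.random.randn(20, 100)
y = np.repeat([0, 1], 10)
Sw = weighted_scatters(within_class_scatters(X, y), alpha=0.5)

In this sketch, alpha = 1 recovers the pooled LDA-like scatter and alpha = 0 keeps the purely class-specific HDA scatters, which mirrors the robustness trade-off controlled by the weighting factor described above.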
Keywords/Search Tags: Face Recognition, Heteroscedastic Discriminant Analysis, Kernel Heteroscedastic Discriminant Analysis, Two-Dimensional Heteroscedastic Discriminant Analysis, Weighted Two-Dimensional Heteroscedastic Discriminant Analysis