
Research On Illumination Invariant Extraction Algorithms For Face Recognition

Posted on: 2011-03-18    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y Cheng    Full Text: PDF
GTID: 1118330335986517    Subject: Computer application technology
Abstract/Summary:
Face recognition, a non-contact and user-friendly biometric identification technology, has broad application prospects in military, public security, and economic security settings, and has become a research focus in pattern recognition, image processing, computer vision, cognitive science, and neural networks. In recent decades a variety of face recognition methods have been proposed, and many automatic face recognition systems have been built by research institutions at home and abroad. However, the FERET and FRVT evaluations show that varying illumination seriously degrades face recognition performance. To weaken or eliminate this problem, this dissertation studies how to extract illumination invariants (i.e., illumination-insensitive features) from the perspectives of the human vision model, multiscale geometric analysis, the Lambertian illumination model, and illumination-insensitive gradient features, and proposes several extraction methods. The main work and results are as follows.

Models that process image information in the manner of the human visual system have recently attracted attention in image processing, image understanding, and pattern recognition. In 2007, Meylan et al. modeled the human retina with the Naka-Rushton function and proposed a method for local contrast enhancement of images. In 2009, Vu et al. applied local contrast enhancement to face recognition under varying illumination and developed an algorithm based on retina modeling. This dissertation observes that Meylan et al. consider only the geometric closeness of neighborhood pixels, not their photometric similarity, when estimating the local illumination. This yields inaccurate local illumination estimates near edges and in textured regions, which in turn distorts the subsequent local illumination compression.
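A bilateral filter addresses exactly this point: it weights each neighbor by geometric closeness (a spatial Gaussian) and by photometric similarity (a range Gaussian), so the illumination estimate does not average across edges. The following is a minimal sketch of such an estimator, not the dissertation's implementation; the function name and the parameter values (sigma_s, sigma_r, radius) are illustrative assumptions:

```python
import numpy as np

def bilateral_illumination(img, sigma_s=3.0, sigma_r=0.1, radius=5):
    # Estimate local illumination: weight each neighbor by geometric
    # closeness (spatial Gaussian) AND photometric similarity (range
    # Gaussian), so sharp edges are not smoothed across.
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            w_s = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
            w_r = np.exp(-(shifted - img) ** 2 / (2.0 * sigma_r ** 2))
            num += w_s * w_r * shifted
            den += w_s * w_r
    return num / den
```

On a region of constant intensity the filter returns the input unchanged, while near an edge the range weight suppresses contributions from the far side, which is the behavior the plain Gaussian estimate lacks.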
To solve this problem, this dissertation introduces bilateral filtering into the human retina model and proposes an algorithm for extracting the illumination invariant based on bilateral filtering and the retina model. The experimental results are satisfactory.

Illumination-model-based methods for extracting illumination invariants mainly use the simple Lambertian illumination model, a classic empirical model. It assumes the object has Lambertian surface reflectance: when light strikes the object, its surface scatters equally in all directions, and the diffuse component depends only on the surface and the incident angle of the light source, regardless of the observer's position. A gray image F satisfying Lambertian reflectance can be described as F = R × I, where R is the intrinsic characteristic of the image, determined by the object's reflectivity and surface normals, and I is the component related to the light source. To extract R under this model, it is generally assumed that I varies slowly while R varies abruptly. On this basis researchers have proposed many methods, e.g., MSR (Multi-Scale Retinex), SQI (Self Quotient Image), and MFSR (Multiscale Facial Structure Representation). MSR and SQI obtain the smoothed image with a weighted Gaussian filter, which makes it difficult to preserve sharp edges in the low-frequency illumination field and prevents accurate estimation of the illumination component; hence these methods cannot extract the illumination invariant accurately. MFSR first applies a logarithmic transform to the face image, then obtains a smoothed image from the log-domain image with a wavelet denoising model, and finally extracts the illumination invariant as the difference between the log-domain image and the smoothed one, achieving good experimental results.
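As an illustrative sketch of these two ideas (not the dissertation's implementation), the fragment below computes an SQI-style quotient F / smooth(F) and an MFSR-style log-difference. A one-level Haar transform with soft thresholding stands in for the full wavelet denoising model, and all function names and parameter values are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient_image(face, sigma=2.0, eps=1e-6):
    # Under F = R x I with slowly varying I, a Gaussian-smoothed copy of F
    # approximates I, so the quotient F / smooth(F) approximates R.
    face = face.astype(np.float64)
    return face / (gaussian_filter(face, sigma) + eps)

def haar2d(x):
    # One-level orthonormal 2-D Haar transform (even-sized images).
    def step(a, axis):
        a = np.moveaxis(a, axis, 0)
        lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)
    lo, hi = step(x, 0)
    ll, lh = step(lo, 1)
    hl, hh = step(hi, 1)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Inverse of haar2d.
    def istep(lo, hi, axis):
        lo = np.moveaxis(lo, axis, 0)
        hi = np.moveaxis(hi, axis, 0)
        out = np.empty((2 * lo.shape[0],) + lo.shape[1:], dtype=np.float64)
        out[0::2] = (lo + hi) / np.sqrt(2.0)
        out[1::2] = (lo - hi) / np.sqrt(2.0)
        return np.moveaxis(out, 0, axis)
    return istep(istep(ll, lh, 1), istep(hl, hh, 1), 0)

def soft(x, t):
    # Soft thresholding of wavelet detail coefficients.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mfsr_style_invariant(face, thresh=0.1):
    # Log transform, denoise the log image, keep the difference as invariant.
    log_img = np.log1p(face.astype(np.float64))
    ll, lh, hl, hh = haar2d(log_img)
    smooth = ihaar2d(ll, soft(lh, thresh), soft(hl, thresh), soft(hh, thresh))
    return log_img - smooth
```

The quotient form divides out the slowly varying I directly; the log-difference form turns the product F = R × I into a sum, so subtracting the smoothed (illumination-dominated) log image leaves the reflectance-dominated detail.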
However, the wavelet transform, an isotropic multiscale analysis, can only describe point-like singularities and cannot represent line singularities such as contours and textures. MFSR therefore suffers from strong pseudo-Gibbs artifacts and cannot obtain an accurate illumination invariant. To address these deficiencies, this dissertation studies multiscale geometric analysis and proposes an illumination-invariant algorithm based on the nonsubsampled contourlet transform with NormalShrink adaptive noise reduction, as well as an improved SQI algorithm. Experimental results show that the proposed algorithms improve both the visual quality of the extracted illumination-invariant features and the accuracy of face recognition.

Image contours, the major high-frequency information, are little affected by lighting changes; they carry most of the information in an image and are an important intrinsic characteristic. Contour features have received extensive attention in image processing and pattern recognition and have been used in stereo matching, image stitching, image retrieval, and image recognition. Receptive fields in the human visual cortex are localized and directional, so an effective image representation should be multi-directional and multiscale. This dissertation therefore studies the contour information of an image and proposes an illumination-invariant feature, the Multiscale Principal Contour Direction (MPCD): it performs contour analysis with the nonsubsampled contourlet transform, constructs multiscale, multi-orientation contour information (complex coefficients), and finally computes the MPCD of an image from its definition. Experimental results show that MPCD is an illumination-insensitive feature.

In recent years, features based on image gradient analysis have been applied to image segmentation, image recognition, and dynamic target tracking.
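The full nonsubsampled contourlet transform is beyond a short sketch, and the dissertation's exact MPCD definition is not given here. As a hypothetical stand-in that conveys the idea of a multiscale, multi-orientation contour response with a per-pixel principal direction, one can steer a first derivative of Gaussian to several orientations at several scales (every name and parameter below is an assumption, not the MPCD algorithm itself):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def principal_contour_direction(img, sigmas=(1.0, 2.0, 4.0), n_orient=8):
    # Multiscale, multi-orientation contour analysis: steer a first
    # derivative of Gaussian to n_orient directions at each scale, sum the
    # absolute responses, and keep the per-pixel direction of maximum
    # response as the principal contour direction.
    img = img.astype(np.float64)
    thetas = np.pi * np.arange(n_orient) / n_orient
    energy = np.zeros((n_orient,) + img.shape)
    for s in sigmas:
        gx = gaussian_filter(img, s, order=(0, 1))  # d/dx (columns)
        gy = gaussian_filter(img, s, order=(1, 0))  # d/dy (rows)
        for k, t in enumerate(thetas):
            # Directional derivative along theta (steerable filter property).
            energy[k] += np.abs(np.cos(t) * gx + np.sin(t) * gy)
    return thetas[np.argmax(energy, axis=0)]
```

Because every response scales linearly with the image intensity, multiplying the image by a positive constant leaves the argmax, and hence the direction map, unchanged; this is the sense in which a principal-direction feature is insensitive to global illumination strength.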
Researchers have pointed out that the image gradient direction is an important gradient feature that is insensitive to varying lighting, and it has been used for face recognition under complex illumination. Gradientfaces first obtains the image gradient field by convolving the image with the first derivative of a Gaussian, then computes the gradient direction in that field; it achieves good results in face recognition under complex illumination conditions. Inspired by the gradient direction and Gradientfaces, this dissertation proposes an illumination-insensitive feature, the Gradient Maximum Component Direction (GMCD). Experimental results show that GMCD outperforms Gradientfaces.
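The Gradientfaces step described above can be sketched as follows (GMCD itself is not defined in this abstract, so only the gradient-direction baseline is shown; the sigma value is an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradientfaces(img, sigma=0.75):
    # Gradient field via convolution with the first derivative of a
    # Gaussian, then the per-pixel gradient direction.
    img = img.astype(np.float64)
    gx = gaussian_filter(img, sigma, order=(0, 1))  # d/dx (columns)
    gy = gaussian_filter(img, sigma, order=(1, 0))  # d/dy (rows)
    return np.arctan2(gy, gx)  # orientation in (-pi, pi]
```

Scaling the image by a positive constant scales gx and gy equally and cancels in the ratio inside arctan2, which is why the gradient direction is insensitive to multiplicative illumination changes.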
Keywords/Search Tags:Face Recognition, Illumination Invariant, Multiscale Geometric Analysis, Human Retina Model, Multiscale Principal Contour Direction, Gradient Maximum Component Direction