
Face Recognition With Multiple Cues

Posted on: 2014-11-30
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z C Han
Full Text: PDF
GTID: 1268330425477332
Subject: Mechanical design and theory
Abstract/Summary:
Biometric recognition identifies people by their inherent biological characteristics and is superior to traditional identification technologies in security and robustness. Owing to its unique advantages, such as convenient data acquisition and the ability to work covertly without disturbing the observed subjects, face recognition has drawn widespread attention and remains a research hotspot. Substantial technological breakthroughs have been made over the past two decades, and face recognition has broad application prospects in public security, information security, financial security and other fields. However, because of the complexity and random variations of face images, there is still a long way to go before the technology meets people's expectations. Improving the identification performance of face recognition therefore remains an urgent and important engineering task, as well as a challenging research issue in pattern recognition.

Face recognition is a complex information-processing task, and in real applications it is affected by a variety of uncertain environmental factors. Incorporating multiple effective sources of feature information and theoretical methods helps a system recognize and discriminate different objects more comprehensively and accurately. This dissertation studies face recognition technology with multiple cues. The main works and contributions are as follows:

1. A precise eye-center localization approach with multiple cues is proposed. The approach searches for the eye centers from coarse to fine in three stages. In the first two stages, the Viola-Jones detector provides a rough localization of the eye centers. The main contribution lies in the third, precise localization stage. Gradient-combination features and Curvelet features are constructed and used in this stage; both possess strong discriminative power in revealing the intensity distribution and edge characteristics of the neighbourhood around an eye center. A rebuilt-error calculation mechanism is proposed, in which the rebuilt errors evaluate the matching similarity between a test eye-patch image and a pre-constructed reference set of eye-patch images. The final eye-center locations are selected by integrating the rebuilt-error results of the gradient-combination features and the Curvelet features (an illustrative sketch of this error fusion appears after contribution 2 below). Experimental results show that the proposed approach achieves high localization precision.

2. A face recognition approach that fuses Gabor-, Curvelet- and LBP-based representations is proposed. Gabor, Curvelet and LBP are effective features that have been used successfully for face representation. They reveal intrinsic characteristics of the described object from different viewpoints: general multi-scale waveform decomposition, multi-scale edge-oriented waveform decomposition, and the micro-texture of lightness differences, respectively. These features exhibit strong correlation and complementarity. In the proposed approach, the Gabor and Curvelet features are first fused at the feature level by canonical correlation analysis (CCA); the matching similarity score of the CCA-fused features and the similarity score of the LBP features are then integrated at the decision level (a sketch of this two-level fusion is given below). Experimental results show that the proposed approach improves recognition accuracy significantly.
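As an illustration of how the two rebuilt-error channels in contribution 1 might be combined, the following Python sketch fuses the gradient-combination and Curvelet error scores over candidate eye-center positions. The min-max normalisation, the weight alpha and the function names are assumptions made for illustration; the dissertation only states that the two rebuilt-error results are integrated.

import numpy as np

def locate_eye_center(candidates, grad_error, curvelet_error, alpha=0.5):
    """Pick the eye-center candidate with the lowest fused rebuilt error.

    candidates     : list of (x, y) positions inside the coarse eye region
    grad_error     : rebuilt errors from the gradient-combination features,
                     one value per candidate
    curvelet_error : rebuilt errors from the Curvelet features
    alpha          : weight of the gradient channel (illustrative value)
    """
    grad_error = np.asarray(grad_error, dtype=float)
    curvelet_error = np.asarray(curvelet_error, dtype=float)
    # Normalise each channel so the two error scales are comparable.
    g = (grad_error - grad_error.min()) / (np.ptp(grad_error) + 1e-12)
    c = (curvelet_error - curvelet_error.min()) / (np.ptp(curvelet_error) + 1e-12)
    fused = alpha * g + (1.0 - alpha) * c
    # The candidate with the smallest fused rebuilt error is selected.
    return candidates[int(np.argmin(fused))]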
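The two-level fusion of contribution 2 can be sketched as follows, using scikit-learn's CCA as a stand-in for the feature-level fusion and a weighted sum for the decision-level integration. The concatenation of the canonical variates, the weight w and all function names are assumptions made for illustration, not the dissertation's exact formulation.

import numpy as np
from sklearn.cross_decomposition import CCA

def fuse_gabor_curvelet(gabor_feats, curvelet_feats, n_components=50):
    """Feature-level fusion of Gabor and Curvelet features via CCA.

    gabor_feats, curvelet_feats: (n_samples, dim) training feature matrices.
    Returns the fitted CCA model and the fused projections.
    """
    cca = CCA(n_components=n_components)
    cca.fit(gabor_feats, curvelet_feats)
    g_proj, c_proj = cca.transform(gabor_feats, curvelet_feats)
    # Serial combination of the two sets of canonical variates.
    return cca, np.hstack([g_proj, c_proj])

def decision_fusion(cca_score, lbp_score, w=0.5):
    """Decision-level fusion: weighted sum of the CCA-fusion similarity
    score and the LBP similarity score (weight w is illustrative)."""
    return w * cca_score + (1.0 - w) * lbp_score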
3. A novel approach is proposed for matching multiple face images of one person against a single face image. The approach first analyses the multiple face images of the same person statistically. After a feature representation is selected, the mean feature values and the principal directions of that person's feature data are computed and taken as the common attributes of the specified person. A single test image is then projected onto these principal directions, and a rebuilt image is generated from a limited number of principal components. The difference between the original test image and the rebuilt image is called the rebuilt error; it serves as the matching error, and the match with the smallest rebuilt error is taken as the best match (a sketch of this subspace matching is given below). Experimental results demonstrate the effectiveness of the proposed approach.

4. For the non-frontal face image matching problem, a novel face pose estimation approach and two simplification operations are proposed. The two simplification operations are aspect-ratio modification and flipped-image selection. In the pose estimation process, locally linear embedding (LLE) is first used for dimensionality reduction of the features; sparse coding and dictionary learning are then used for yaw-angle classification. Because a face image taken from the side is compressed in width, an aspect-ratio (width/height) modification operation is proposed. When the two images to be matched face opposite sides and have large yaw angles, matching them directly yields a large error; a flipped-image selection operation is therefore proposed, which replaces one of the two images with its left-right mirror image during matching (both operations are sketched below). Experimental results demonstrate that the proposed pose estimation approach is robust and that the two simplification operations improve recognition performance noticeably.
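A minimal sketch of the single-vs-multiple matching of contribution 3: the mean and principal directions of one person's images are taken as that person's common attributes, a test image is projected and rebuilt from a limited number of components, and the smallest rebuilt error gives the best match. Computing the principal directions with an SVD and the number of retained components are assumptions; the function names are hypothetical.

import numpy as np

def person_subspace(images, n_components=10):
    """Common attributes of one person's multiple face images.

    images: (n_images, n_pixels) matrix, each row a vectorised face image.
    Returns the mean face and the leading principal directions.
    """
    mean = images.mean(axis=0)
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:n_components]

def rebuilt_error(test_image, mean, directions):
    """Project a test image onto the person's principal directions,
    rebuild it from the limited number of components, return the error."""
    centered = test_image - mean
    rebuilt = directions.T @ (directions @ centered)
    return np.linalg.norm(centered - rebuilt)

def match(test_image, gallery):
    """gallery: {person_id: (mean, directions)}; smallest rebuilt error wins."""
    return min(gallery, key=lambda pid: rebuilt_error(test_image, *gallery[pid]))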
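The two simplification operations of contribution 4 might look as follows. The cosine-based width correction and the yaw threshold are assumptions made for illustration, not the exact rules used in the dissertation; only the left-right flip itself is taken directly from the text.

import numpy as np

def corrected_width(width, yaw_deg):
    """Aspect-ratio modification: a side-view face appears compressed in
    width, so stretch the width according to the estimated yaw angle.
    The cosine model and the lower bound are illustrative assumptions."""
    return int(round(width / max(np.cos(np.radians(abs(yaw_deg))), 0.5)))

def prepare_pair(img_a, yaw_a, img_b, yaw_b, yaw_threshold=30.0):
    """Flipped-image selection: if the two faces look towards opposite
    sides with large yaw angles, replace one with its left-right mirror
    before matching. Images are (H, W) or (H, W, C) numpy arrays."""
    if yaw_a * yaw_b < 0 and min(abs(yaw_a), abs(yaw_b)) > yaw_threshold:
        img_b = img_b[:, ::-1]  # horizontal flip
    return img_a, img_b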
Keywords/Search Tags: Face recognition, Eye center localization, Information fusion, Single vs. multiple image matching, Non-frontal face image processing