
Study on 3D Face Recognition Against Expression Change

Posted on: 2015-02-25
Degree: Master
Type: Thesis
Country: China
Candidate: D W Wang
Full Text: PDF
GTID: 2268330422472118
Subject: Signal and Information Processing
Abstract/Summary:
With the rapid development of information technology, the demand for efficient and accurate personal authentication is growing. As an important part of identity authentication technology, face recognition has high academic value and broad market prospects, and it has become an important research topic in the field of pattern recognition. According to the data being processed, current face recognition research falls into two types: two-dimensional and three-dimensional. Two-dimensional face recognition can already achieve satisfactory results under certain constraints, but pose, illumination, expression, age, and other factors remain bottlenecks to its further development. Three-dimensional data, by contrast, provide richer identification information and are regarded as invariant to illumination and pose, so 3D face recognition has become a hotspot in the field. This paper mainly performs the following research:

① To reduce the within-class differences caused by cusps, holes, and pose variation in 3D face data, a novel 3D face data preprocessing scheme is proposed, consisting of segmentation, normalization, smoothing, and pose correction. The scheme not only transforms scattered point-cloud data into normalized data, but also weakens noise and the interference of redundant information. Although the scheme is developed on the CASIA 3D face database, its essential idea does not lose generality.

② To enhance the robustness of 3D face recognition to expression change, a face recognition algorithm is proposed that fuses the depth data and the facial rigid region. First, the nose tip is located from the facial geometric features, the effective facial region is cut around the nose tip, and faces with different poses are transformed into a normalized frontal pose.
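The nose-tip localization and cropping step can be illustrated with a minimal sketch. The thesis locates the tip from facial geometric features; the heuristic below (nose tip as the point of maximum depth on a roughly frontal scan) and the 90 mm cropping radius are simplifying assumptions for illustration, not the author's exact method.

```python
import numpy as np

def locate_nose_tip(points):
    """Locate the nose tip of a roughly frontal face scan.

    Heuristic assumption: with +z toward the camera, the nose tip is
    approximately the point with the largest z coordinate.
    `points` is an (N, 3) float array of x, y, z coordinates.
    """
    return points[np.argmax(points[:, 2])]

def crop_face_region(points, center, radius=90.0):
    """Keep only points within `radius` (mm) of `center` (the nose tip)."""
    dist = np.linalg.norm(points - center, axis=1)
    return points[dist <= radius]

# Toy scan: a coarse paraboloid "face" bulging toward +z.
xs, ys = np.meshgrid(np.linspace(-100, 100, 41), np.linspace(-100, 100, 41))
zs = 60.0 - (xs**2 + ys**2) / 400.0
scan = np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)

tip = locate_nose_tip(scan)
face = crop_face_region(scan, tip, radius=90.0)
```

On real scans a single depth maximum is unreliable (chin, hair, or spikes can win), which is one reason the thesis uses geometric features rather than this bare heuristic.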
Then the global depth image is matched using two-dimensional principal component analysis (2DPCA), and the local facial rigid region is matched using a modified iterative closest point (ICP) algorithm. Finally, the matching results based on the global and local features are fused. Experiments on the CASIA 3D face database show that the proposed system achieves a higher recognition rate than either single-feature system, as well as better robustness to facial expression change.

③ Most traditional algorithms generate a depth image from the depth data through point-cloud interpolation, which increases the time complexity; a novel scheme that avoids point-cloud interpolation is proposed in this paper.

④ In traditional rigid-region extraction schemes, segmenting the target area with a fixed radius may decrease the between-class differences. A new approach based on facial geometric structure is therefore presented to increase the between-class differences of the rigid regions: key points are located through the facial geometric structure, and the rigid region is then extracted using a radius obtained automatically from those key points.
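The global 2DPCA matching step can be sketched as follows. Standard 2DPCA builds the image scatter matrix G = mean((A − Ā)ᵀ(A − Ā)) over the training depth images, projects each image onto the top eigenvectors of G, and compares feature matrices by nearest neighbour. The feature count and the Frobenius-norm distance below are illustrative choices, not necessarily those used in the thesis.

```python
import numpy as np

def train_2dpca(images, n_components=5):
    """Fit 2DPCA on a stack of depth images of shape (M, h, w).

    Returns the (w, n_components) projection matrix formed by the top
    eigenvectors of the image scatter matrix G.
    """
    centered = images - images.mean(axis=0)
    # G[w, v] = mean over images and rows of outer products of image rows.
    G = np.einsum('mhw,mhv->wv', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)        # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :n_components]   # keep the top components

def project(image, W):
    """Feature matrix Y = A W for one depth image."""
    return image @ W

def match(probe_feat, gallery_feats):
    """Nearest neighbour under the Frobenius norm of feature differences."""
    dists = [np.linalg.norm(probe_feat - g) for g in gallery_feats]
    return int(np.argmin(dists))

# Toy usage: recognize a slightly perturbed copy of a gallery image.
rng = np.random.default_rng(0)
gallery = rng.standard_normal((4, 8, 6))
W = train_2dpca(gallery, n_components=3)
gallery_feats = [project(a, W) for a in gallery]
probe = gallery[2] + 1e-4 * rng.standard_normal((8, 6))
best = match(project(probe, W), gallery_feats)
```

In the fused system described above, the distance from this global matcher and the residual from the local ICP alignment would each be normalized and combined into a single score; the exact fusion rule is given in the thesis body, not the abstract.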
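The adaptive-radius idea in ④ can be sketched as below. The keypoints are assumed to come from a detector (not shown here), and deriving the radius as a multiple of the inner-eye-corner span is a hypothetical choice to illustrate "a radius obtained automatically from the key points"; the thesis's actual keypoints and radius rule may differ.

```python
import numpy as np

def adaptive_rigid_radius(nose_tip, inner_eye_l, inner_eye_r, scale=1.2):
    """Cropping radius derived from face geometry rather than a fixed
    constant, so the rigid region scales with each individual face.
    `scale` is an illustrative tuning factor."""
    return scale * np.linalg.norm(inner_eye_r - inner_eye_l)

def extract_rigid_region(points, nose_tip, radius):
    """Keep the points within `radius` of the nose tip."""
    keep = np.linalg.norm(points - nose_tip, axis=1) <= radius
    return points[keep]

# Toy usage with hand-placed keypoints (mm).
nose = np.array([0.0, 0.0, 60.0])
eye_l = np.array([-16.0, 30.0, 40.0])
eye_r = np.array([16.0, 30.0, 40.0])
r = adaptive_rigid_radius(nose, eye_l, eye_r)
cloud = np.array([[0.0, 0.0, 60.0], [0.0, 0.0, 0.0]])
rigid = extract_rigid_region(cloud, nose, r)
```

Because the radius tracks the measured keypoint span, large and small faces yield proportionally sized rigid regions, which is what preserves between-class differences relative to a fixed radius.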
Keywords/Search Tags: face recognition, feature extraction, principal component analysis, feature fusion, iterative closest point