
Multi-modal Image Fusion Based Face Recognition Algorithm Research

Posted on: 2020-04-01 | Degree: Master | Type: Thesis
Country: China | Candidate: W D Zhu | Full Text: PDF
GTID: 2428330596476734 | Subject: Engineering
Abstract/Summary:
Face recognition has received extensive attention in the field of computer vision over the past three decades, owing to its significant theoretical challenges and broad practical applications. With the development of multi-modal sensors, it has become easy to capture multi-modal face images in the real world. Because of the differences in imaging mechanisms, conventional single-modal face recognition algorithms cannot handle multi-modal face images, which limits the further application of face recognition. Multi-modal data provide complementary information that can improve recognition performance, so face recognition methods designed to fuse information from multiple modalities have greater practical value than single-modal methods. Multi-modal face recognition is now highly valued because of its important role in the economy, social security, criminal investigation, the military, and other fields. This thesis mainly studies multi-modal face recognition in near-infrared versus visible-light scenarios and in multi-view scenarios.

Because of the large divergences across modalities, data from different modalities are generally considered to come from different domains with different distributions and therefore cannot be compared directly. Compared with single-modal face recognition, the central challenge of the multi-modal setting is to relate the information across modalities and to minimize the divergence between them. To address this challenge, this thesis designs an overall framework for multi-modal face recognition that encompasses feature fusion and common-subspace learning. A feature fusion method for multi-modal data is given. The correlation among samples of the same class from different modalities is explored, and reasonable assumptions are put forward based on the characteristics of multi-modal images. On this basis, a face recognition scheme based on low-rank subspace learning is proposed, and it is further developed into a joint sparse representation algorithm built on the low-rank common subspace.

The optimization of the algorithm and the rationale behind it are presented: the alternating direction method of multipliers (ADMM) is used to derive the update formulas, and the final computational procedure of the algorithm is obtained. Convergence experiments show that the recognition rate of the proposed algorithm converges quickly to a stable value, and the influence of parameter changes on the recognition rate is tested and analyzed. Finally, experimental results on two datasets, the near-infrared-visible face dataset (HFB Database) and the multi-view face dataset (CMU-PIE Face database), show that the proposed algorithm outperforms existing state-of-the-art methods, even with three or more modalities. In summary, this thesis achieves its research objectives and effectively addresses the problem of multi-modal face recognition.
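To make the joint sparse representation idea concrete, the following is a minimal NumPy sketch rather than the thesis's actual formulation: it assumes the gallery dictionaries and probe features of each modality have already been projected into a learned low-rank common subspace, and it replaces the thesis's ADMM derivation with a simple proximal-gradient (ISTA) loop. The row-wise l2,1 penalty ties the sparse supports of the modalities together, and classification uses class-wise reconstruction residuals; all function names and parameters here are illustrative.

import numpy as np

def joint_sparse_code(dicts, probes, lam=0.1, n_iter=300):
    # Minimize 0.5 * sum_m ||y_m - D_m x_m||^2 + lam * ||X||_{2,1} by ISTA,
    # where column m of X codes modality m and the row-wise l2,1 penalty
    # forces all modalities to share the same sparse support.
    # dicts : list of (d_m, n) gallery dictionaries, one per modality
    # probes: list of (d_m,) probe feature vectors, one per modality
    n, M = dicts[0].shape[1], len(dicts)
    X = np.zeros((n, M))
    # Step size from the largest Lipschitz constant among the modalities.
    t = 1.0 / max(np.linalg.norm(D, 2) ** 2 for D in dicts)
    for _ in range(n_iter):
        # Gradient step on each modality's least-squares term.
        for m, (D, y) in enumerate(zip(dicts, probes)):
            X[:, m] -= t * D.T @ (D @ X[:, m] - y)
        # Row-wise soft-thresholding: proximal operator of t * lam * ||X||_{2,1}.
        row_norms = np.linalg.norm(X, axis=1, keepdims=True)
        X *= np.maximum(0.0, 1.0 - t * lam / np.maximum(row_norms, 1e-12))
    return X

def classify(dicts, labels, probes, lam=0.1):
    # Assign the probe to the class with the smallest summed reconstruction residual.
    X = joint_sparse_code(dicts, probes, lam)
    residual = lambda c: sum(
        np.linalg.norm(y - D[:, labels == c] @ X[labels == c, m])
        for m, (D, y) in enumerate(zip(dicts, probes)))
    return min(np.unique(labels), key=residual)

In this sketch the common-subspace projections are taken as given; learning them jointly with the low-rank constraint, and solving the coding problem with ADMM, is the part carried out in the thesis and not reproduced here.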
Keywords/Search Tags:multi-modal fusion, joint sparse representation, common subspace, image fusion, face recognition