
Research And Implementation Of Face Recognition Optimization Technology Based On Deep Learning

Posted on: 2020-05-16
Degree: Master
Type: Thesis
Country: China
Candidate: C Yuan
Full Text: PDF
GTID: 2428330596975130
Subject: Computer Science and Technology
Abstract/Summary:
With the development of deep learning, traditional computer vision tasks have advanced rapidly, and technologies such as face detection and face recognition are now widely used across many industries. Although deep learning has greatly promoted the development of face analysis technology, its compute-intensive and memory-intensive nature makes such models difficult to deploy on embedded devices and mobile computing platforms. Taking the state-of-the-art face recognition method ArcFace as an example, the LResNet100E-IR model built on it is about 250 MB in size; although it achieves excellent accuracy, deploying it on resource-constrained devices remains a difficult challenge. Optimizing neural network performance on these resource-scarce devices and platforms has therefore become a primary problem for researchers.

Considering the problems above, this thesis focuses on deep-learning-based neural network optimization techniques for embedded platforms. We propose a Two-Stage Knowledge Distillation (TSKD) method that improves the performance of compact models without increasing their complexity, thereby achieving the goals of compressing and accelerating the models. We then study face recognition and improve face recognition models with the TSKD method, verify our work on face image datasets such as LFW, and implement a face recognition SDK based on deep neural network models trained with these methods. The primary research work of this thesis is as follows:

(1) We first survey the mainstream compression methods for neural network models, analyze the current state of optimization technologies, and compare and summarize their differences and advantages.

(2) We propose the Two-Stage Knowledge Distillation (TSKD) method. It introduces the concept of an adapter-layer network into traditional knowledge distillation and mitigates the poor knowledge transfer caused by the structural mismatch between the teacher network and the student network. We verify the method with experiments on the CIFAR-10 and CIFAR-100 datasets.

(3) For general face recognition, we analyze the characteristics of face recognition problems and then improve the performance and efficiency of the MobileNet network through optimization and compression combined with the TSKD method, with experiments on the LFW dataset.

(4) Beyond general face recognition, this thesis also studies the special problem of low-resolution face recognition. We put forward a knowledge-extraction approach that improves recognition accuracy on low-resolution face images. We further optimize the model with TSKD, substantially improving its execution efficiency and memory footprint.
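The abstract does not give the exact objective used by TSKD. As a minimal illustration only, the classic knowledge-distillation loss (soft targets from a teacher plus hard-label cross-entropy) that such methods build on can be sketched in plain Python; the temperature `T` and weight `alpha` below are generic hyperparameters, not values from this thesis:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; a higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.7):
    """Weighted sum of a soft-target KL term and hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student) on temperature-softened outputs,
    # scaled by T^2 as in standard distillation formulations.
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    soft_term = (T ** 2) * kl
    # Ordinary cross-entropy of the student against the ground-truth class.
    hard_term = -math.log(softmax(student_logits)[hard_label])
    return alpha * soft_term + (1 - alpha) * hard_term
```

TSKD adds an adapter-layer network on top of this basic scheme to bridge the structural gap between teacher and student; that component is specific to the thesis and is not shown here.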
Keywords/Search Tags:deep learning, face recognition, embedded platform, deep neural network compression, knowledge distillation