
Research and Implementation of a Liveness Detection Network Based on Deep Learning

Posted on: 2021-03-09
Degree: Master
Type: Thesis
Country: China
Candidate: W B Fu
GTID: 2428330605954172
Subject: Software engineering

Abstract/Summary:
Deep learning has achieved remarkable results in the field of facial liveness detection. However, when dealing with face images under complex conditions such as occlusion, poor illumination, improper angles, or AI-based face swapping, it is difficult to predict a large number of facial feature points accurately, which degrades the accuracy of liveness detection. Existing solutions for improving this accuracy fall mainly into regression-model-based methods and deep learning methods. Generally speaking, the theoretical basis of deep learning comes from regression-model-based methods, but deep learning greatly improves and optimizes them: it replaces the complex machine-learning process of building vector models and achieves better prediction results. Nevertheless, in practical applications, various interference factors can still make deep-learning-based liveness detection inaccurate. This paper therefore studies both regression-model-based methods and deep-learning-based neural network models and, building on improvements to several existing networks, designs a two-layer neural network model based on VGG and ResNet to improve the accuracy of liveness detection. The main contributions are as follows:

(1) To improve the accuracy of facial feature point localization under interference factors, the first layer is a network model based on the C-Canny algorithm and an improved VGG. This layer performs face alignment and facial feature point localization. First, in the face alignment stage, the C-Canny algorithm is used to relocate the face region. Then the face image is fed into the improved VGG network model to locate the facial feature points. Finally, the located feature points are passed to the second-layer liveness detection network designed in this paper to support face similarity analysis and facial expression frequency statistics.

(2) To improve the accuracy of liveness detection, the second layer is a liveness detection network model that processes face video streams. This layer combines the ResNet-34 network model, principal component analysis (PCA) of the background image, and an improved EAR (eye aspect ratio) algorithm to extract facial features, and finally performs liveness detection with logistic regression. The specific process is as follows: first, the set of facial feature points obtained from the first-layer network is fed into ResNet-34 for facial similarity comparison, which provides identity authentication. Then the EAR values of these feature points are computed, and the blink frequency and the opening-and-closing frequency are recorded (a minimal sketch of EAR-based blink counting is given after this list of contributions). The PCA background image of the original input is analyzed, and the optimal threshold is selected from the difference in vector distance between positive and negative samples, so as to judge the authenticity of the face image. Finally, the facial feature information obtained above is used as the input of the liveness determination layer, and liveness detection is carried out with logistic regression.

(3) To improve the detection success rate for power users during peak periods, the two-layer liveness detection network designed in this paper has also been deployed in engineering practice. Its workflow is summarized as follows: first, the first-layer feature point localization model obtains the face image through the HTML canvas tag and carries out face alignment and feature point localization. Then the localization results and the original image are input into the second-layer network model, and a series of facial features are extracted to perform liveness detection.
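The blink and opening-and-closing statistics in contribution (2) are built on the eye aspect ratio. The following is only a minimal sketch, not the thesis's improved EAR algorithm: it assumes the standard six-point eye landmark ordering p1..p6, and the closed-eye threshold and function names are illustrative.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR for six eye landmarks ordered p1..p6, where p1/p4 are the eye
    corners and p2,p3 / p6,p5 are the upper/lower eyelid points."""
    a = np.linalg.norm(eye[1] - eye[5])      # ||p2 - p6||
    b = np.linalg.norm(eye[2] - eye[4])      # ||p3 - p5||
    c = np.linalg.norm(eye[0] - eye[3])      # ||p1 - p4||
    return (a + b) / (2.0 * c)

def count_blinks(ear_per_frame, closed_thresh=0.2, min_closed_frames=2):
    """Count blinks in a per-frame EAR sequence: one blink is a run of at
    least min_closed_frames consecutive frames with EAR below closed_thresh."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:      # sequence ends with eyes closed
        blinks += 1
    return blinks
```

Dividing the blink count by the video duration gives a blink frequency of the kind used as an input feature in the liveness determination layer.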
To sum up, this paper studies several existing network models and proposes new solutions to their shortcomings. Experiments show that the network model designed in this paper can locate facial feature points under complex conditions and achieves high liveness detection accuracy when processing face video streams. On the open-source 300-W and 300-VW datasets provided by the ibug website, the loss value for facial feature point localization is reduced by about 20% compared with several existing neural networks, and in terms of liveness detection accuracy the positive sample rate is reasonable.
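To make the deployed workflow of contribution (3) concrete, the sketch below strings the two layers together and makes the final decision with scikit-learn's LogisticRegression. It is an illustration under stated assumptions, not the thesis's implementation: the first- and second-layer models are represented by placeholder callables, and the four-dimensional feature vector (similarity score, blink frequency, opening/closing frequency, PCA background distance) is a hypothetical encoding.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_liveness_features(frames, landmark_model, similarity_model,
                              ear_statistics, pca_background_distance):
    """Run the two-layer pipeline on a list of video frames and return a
    hypothetical 4-d feature vector for the liveness determination layer."""
    landmarks = [landmark_model(f) for f in frames]           # layer 1: C-Canny + improved VGG (placeholder)
    similarity = similarity_model(landmarks, frames)          # layer 2: ResNet-34 similarity score (placeholder)
    blink_freq, open_close_freq = ear_statistics(landmarks)   # improved EAR statistics (placeholder)
    bg_distance = pca_background_distance(frames)             # PCA background analysis (placeholder)
    return np.array([similarity, blink_freq, open_close_freq, bg_distance])

def train_liveness_classifier(X_train, y_train):
    """Liveness determination layer: logistic regression over labelled
    feature vectors from real and spoofed training videos."""
    clf = LogisticRegression()
    clf.fit(X_train, y_train)
    return clf

def is_live(clf, features, threshold=0.5):
    """True if the predicted liveness probability exceeds the decision threshold."""
    prob = clf.predict_proba(features.reshape(1, -1))[0, 1]
    return prob >= threshold
```

In the deployment described above, the frames would come from the browser via the canvas tag; here they are simply a Python list passed to the pipeline.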
Keywords/Search Tags: liveness detection, improved VGG network model, C-Canny algorithm, deep residual network