At present, face verification technology is widely used in fields such as surveillance and access control systems. Early face verification studies were based on photographs taken under fixed conditions, such as ID-card photos, and therefore achieved very good results. Photos captured in real-world scenes, however, are more complicated: they are often affected by blur, occlusion, and illumination changes, which cause a loss of image information and make face verification more difficult. Therefore, this thesis studies face images from real-world scenes rather than images taken under fixed conditions. To make facial features more discriminative, several loss functions, such as the A-Softmax loss, have been proposed recently. However, these loss functions do not address the problem that hard samples contribute little to the training loss. To solve this problem, this thesis proposes the angular focus softmax (AF-Softmax) loss, which introduces a focusing factor into the A-Softmax loss to down-weight the losses of well-classified examples, thereby improving verification accuracy. Traditional face verification requires a threshold to decide whether two photos belong to the same person, but faces shot in different scenes require different thresholds, which is inconvenient for verification in real-world scenes. To solve this problem, this thesis uses deep learning to train a metric network. To improve the performance of the metric network, this thesis refines the network structure and studies the fusion of feature vectors. Furthermore, this thesis again applies the idea of focal loss so that training pays more attention to hard samples, improving the accuracy of the model. The proposed scheme achieves significant results on LFW, YTF, and a private dataset.
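The focusing-factor idea described above can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: it combines the standard A-Softmax (SphereFace) angular-margin probability with a focal-loss-style modulating term `(1 - p)^gamma`; the function names, the margin `m = 4`, and the focusing parameter `gamma = 2.0` are illustrative assumptions.

```python
import numpy as np

def a_softmax_prob(cos_theta, norm, y, m=4):
    """Target-class probability under A-Softmax for one sample.

    cos_theta: array of cosine similarities between the feature and each
               class weight vector; norm: feature magnitude ||x||;
    y: index of the ground-truth class; m: angular margin (assumed m=4).
    """
    # psi(theta) = (-1)^k * cos(m*theta) - 2k is the monotonic extension
    # of cos(m*theta) used by A-Softmax for the target class.
    theta = np.arccos(np.clip(cos_theta[y], -1.0, 1.0))
    k = np.floor(m * theta / np.pi)
    psi = (-1.0) ** k * np.cos(m * theta) - 2.0 * k

    logits = norm * cos_theta.astype(float).copy()
    logits[y] = norm * psi            # margin applied to the target class only
    exp = np.exp(logits - logits.max())  # shift for numerical stability
    return exp[y] / exp.sum()

def af_softmax_loss(cos_theta, norm, y, m=4, gamma=2.0):
    """Focal modulation of the A-Softmax loss (illustrative AF-Softmax sketch).

    (1 - p)^gamma shrinks toward 0 as p -> 1, so well-classified samples
    contribute little and training focuses on hard samples.
    """
    p = a_softmax_prob(cos_theta, norm, y, m)
    return -((1.0 - p) ** gamma) * np.log(p)
```

Because `(1 - p)^gamma <= 1`, the modulated loss is never larger than the plain cross-entropy term `-log p`, and the gap grows as a sample becomes easier, which is exactly the down-weighting effect the abstract describes.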