
Transfer Learning With A Few Samples For Face Identification

Posted on: 2020-12-09
Degree: Master
Type: Thesis
Country: China
Candidate: Y Cao
Full Text: PDF
GTID: 2428330572456821
Subject: Information and Communication Engineering
Abstract/Summary:
Face recognition is a widely studied research field. In recent years, transfer learning has shown impressive performance on face identification. However, most existing approaches assume a large number of target-domain samples, whether labeled or not. In many real-world applications only scarce data are available, e.g. 2 or 3 samples per subject, and directly training deep convolutional neural networks (DCNNs) on them leads to over-fitting. In addition, face recognition has achieved remarkable success in constrained settings, where faces have a frontal bias because of the limitations of face detection algorithms. Unconstrained face verification, however, remains a challenging problem, and the IJB-A dataset is an important benchmark in this field.

To address the problem of too few samples in the target domain, we propose a novel deep domain adaptation approach, Restricted Parameter Learning (RPL), which handles training with scarce target data by restricting the adaptation procedure: the effective part of the parameters learned from the source domain is reserved and re-optimized in the target domain. Specifically, we transfer knowledge from the source domain to the target domain in a very deep network, obtaining satisfying results while avoiding the over-fitting or structural damage that direct training on a few target samples may cause. Furthermore, RPL remains effective even when the data distributions of the source and target domains differ considerably. To demonstrate the performance of RPL, we evaluate it on two challenging face identification databases, LFW and FERET.

To address the performance drop of transfer-learning-based unconstrained face recognition, we propose a model called Deep Transfer Network (DTN), which integrates transfer learning with an attention-based feature aggregation mechanism. First, we train a lightweight deep network, ShuffleNet, supervised by the A-Softmax loss function on the VGG-Face dataset to extract unconstrained face features. Then, we apply the triplet loss for metric learning, followed by feature aggregation, to further improve performance. Finally, recognition results are obtained with a support vector machine. Thanks to its compact representation, DTN is more efficient than previous methods, and it produces results comparable to the state-of-the-art on the challenging IJB-A face dataset.
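The restricted-adaptation idea behind RPL can be illustrated on a toy model (this is a hedged sketch, not the thesis's actual implementation): train a linear classifier on plentiful source data, then, with only a handful of target samples, freeze most of the source-learned parameters and re-optimize only a small restricted subset. The model, mask choice, and all variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w, lr=0.1, epochs=200, mask=None):
    """Gradient descent on the logistic loss.

    `mask` restricts which weights may update: entries with mask == 0
    are frozen (their gradient is zeroed out), mimicking RPL's idea of
    reserving source-learned parameters during adaptation.
    """
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)
        if mask is not None:
            grad = grad * mask  # freeze weights where mask == 0
        w = w - lr * grad
    return w

# Source domain: plentiful labeled data.
Xs = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
ys = (Xs @ w_true > 0).astype(float)
w_source = train(Xs, ys, np.zeros(10))

# Target domain: only a few samples (e.g. 3 per class).
Xt = rng.normal(size=(6, 10))
yt = (Xt @ w_true > 0).astype(float)

# Restricted adaptation: keep most source-learned weights fixed,
# re-optimize only the last three coordinates on the scarce target data.
mask = np.zeros(10)
mask[-3:] = 1.0
w_adapted = train(Xt, yt, w_source.copy(), mask=mask)

# The frozen coordinates are untouched; only the restricted subset moved.
assert np.allclose(w_source[:-3], w_adapted[:-3])
```

Freezing most of the parameters is what keeps the tiny target set from over-fitting: only a low-capacity subset of the model is exposed to the scarce data, while the bulk of the source knowledge is preserved.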
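The triplet loss used in DTN's metric-learning stage can be written out directly (a minimal sketch of the standard formulation, with made-up embedding vectors): it pulls an anchor embedding toward a positive sample of the same identity and pushes it away from a negative sample of a different identity, up to a margin.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on squared Euclidean distances:
    max(0, d(a, p) - d(a, n) + margin)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])   # anchor embedding
p = np.array([0.9, 0.1])   # same identity, already close
n = np.array([-1.0, 0.5])  # different identity, already far

# Positive is close and negative is far beyond the margin,
# so this triplet contributes zero loss.
print(triplet_loss(a, p, n))  # → 0.0
```

When the negative sits closer than `d(a, p) + margin`, the loss is positive and its gradient separates the identities; triplets that already satisfy the margin contribute nothing, which is why triplet training typically focuses on hard examples.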
Keywords/Search Tags: Face identification, Few samples, Transfer learning, DCNNs, Unconstrained