
Research On Unsupervised Image Classification Method Based On Distillation Transfer Network

Posted on: 2020-03-30  Degree: Master  Type: Thesis
Country: China  Candidate: E X Chen  Full Text: PDF
GTID: 2428330623967024  Subject: Software engineering
Abstract/Summary:
With the wide application of computer vision technologies such as intelligent surveillance and autonomous driving, image classification, the basic technology underlying such applications, has developed rapidly in recent years. However, traditional image classification methods cannot handle Internet-scale unlabeled data, and most images available in real-world production are unlabeled. To solve this problem, unsupervised domain adaptation was proposed. As one of the latest achievements in unsupervised domain adaptation, the deep transfer network combines the feature-extraction strength of deep neural networks with the distribution-matching ability of domain adaptation. By matching deep features between the source domain and the target domain, it greatly improves prediction accuracy over traditional domain adaptation techniques. However, because of its complex network structure, the complex relationships introduced by noisy data easily lead to over-fitting during transfer training. Owing to the large differences between the source and target datasets, it is difficult to locate the optimal scale of the model with standard dropout. Moreover, the deep transfer network lacks sufficient generalization ability during domain transfer training: although the category probability vector is used as the soft target of training, the probability matching of the correct category remains biased and the similarity information between categories is ignored, which degrades the matching performance of the final model.

In this thesis, we propose the distillation transfer network (Distilling-DTN), built on the deep transfer network. Its main improvement is embedding a distillation operation in the conditional distribution matching stage. The underlying principle is that as the temperature parameter increases, the probability distribution over categories becomes more uniform, so the weak (non-maximal) category probabilities contribute more to model training. The class-similarity information distilled at different temperatures is fused into the optimization target for conditional distribution matching. The resulting model can match the latent feature distributions between the source and target domains, further improving classification performance on unlabeled target data.

To suppress over-fitting during domain transfer, we further propose multi-scale fusion dropout. The main idea is to pre-train several networks at different scales and use a genetic algorithm to search for the optimal set of gating variables, thereby obtaining the optimal scale of each model; the optimal scale is then used to scale down the corresponding network parameters, yielding a predictive sub-model. The predictive sub-models are fused according to trained weights to obtain the final prediction model. Compared with standard dropout, multi-scale fusion dropout synthesizes the feature information provided by each scale, making the output predictions more reliable. In this thesis, multi-scale fusion dropout is applied to the distillation transfer network, which both demonstrates its applicability in transfer networks and strengthens the distillation transfer network's resistance to over-fitting.

The integrated model was evaluated on standard benchmarks: accuracy on the MNIST/USPS, SVHN/MNIST, and CIFAR-10/CIFAR-100 transfer tasks increased by 0.72%, 1.19%, and 3.18%, respectively, which demonstrates the effectiveness of the improvements proposed in this thesis.
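To make the distillation step concrete, the following is a minimal NumPy sketch of temperature-scaled softening and of fusing class-similarity information distilled at several temperatures into one soft target. The function names, the equal-weight fusion, and the cross-entropy matching term are illustrative assumptions; the abstract does not specify the exact fusion weights or the precise form of the conditional distribution matching loss.

    import numpy as np

    def soften(logits, T):
        """Temperature-scaled softmax: a higher T flattens the class
        probability vector, exposing inter-class similarity."""
        z = logits / T
        z = z - z.max(axis=-1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def fused_soft_target(source_logits, temperatures, weights=None):
        """Fuse soft targets distilled at several temperatures
        (here by a simple weighted mean, an assumed fusion rule)."""
        if weights is None:
            weights = np.full(len(temperatures), 1.0 / len(temperatures))
        return sum(w * soften(source_logits, T)
                   for w, T in zip(weights, temperatures))

    def matching_loss(target_logits, soft_target, T):
        """Cross-entropy between the fused soft target and the
        target-domain prediction softened at the same temperature,
        standing in for the conditional distribution matching term."""
        p = soften(target_logits, T)
        return -np.sum(soft_target * np.log(p + 1e-12), axis=-1).mean()

At T = 1 the soft target is the ordinary class probability vector; at higher temperatures the non-maximal entries grow, so the fused target carries the between-category similarity structure that, per the abstract, plain soft targets ignore.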
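The multi-scale fusion dropout search can be sketched in the same spirit. Below, a toy genetic algorithm evolves a binary gate vector (one gate per unit) for one pre-trained network, and the fittest gate defines the retained scale; model_logits_fn, the labeled validation split, and all GA hyper-parameters are hypothetical stand-ins, since the abstract does not give the encoding, operators, or fitness function used in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(gate, model_logits_fn, x_val, y_val):
        """Validation accuracy of the sub-model obtained by applying
        the binary gate as a fixed dropout mask (model_logits_fn is a
        hypothetical hook into one pre-trained network)."""
        logits = model_logits_fn(x_val, gate)
        return (logits.argmax(axis=-1) == y_val).mean()

    def ga_search(n_units, model_logits_fn, x_val, y_val,
                  pop=20, gens=30, p_mut=0.05):
        """Evolve gate vectors by selection, crossover, and mutation;
        the fittest gate defines the network's optimal scale."""
        P = rng.integers(0, 2, size=(pop, n_units))
        for _ in range(gens):
            f = np.array([fitness(g, model_logits_fn, x_val, y_val)
                          for g in P])
            P = P[np.argsort(f)[::-1]]                   # rank by fitness
            kids = []
            while len(kids) < pop // 2:
                a, b = P[rng.integers(0, pop // 2, size=2)]  # parents
                cut = int(rng.integers(1, n_units))
                child = np.concatenate([a[:cut], b[cut:]])   # crossover
                flip = rng.random(n_units) < p_mut           # mutation
                kids.append(np.where(flip, 1 - child, child))
            P[pop // 2:] = kids                  # replace weakest half
        f = np.array([fitness(g, model_logits_fn, x_val, y_val)
                      for g in P])
        return P[int(np.argmax(f))]

    def fuse(predictions, weights):
        """Fuse the sub-models' class probabilities with trained
        fusion weights to form the final prediction model's output."""
        w = np.asarray(weights, dtype=float)
        return np.tensordot(w / w.sum(), np.stack(predictions), axes=1)

Running ga_search once per pre-trained scale yields one predictive sub-model each; fuse then combines their outputs, corresponding to the weighted fusion step the abstract describes.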
Keywords/Search Tags: image classification, dropout, domain adaptation, Distilling-DTN, deep neural network