In recent years, with the rise of deep neural networks, artificial intelligence research has made remarkable progress. However, traditional deep neural networks struggle when the domains, i.e., the underlying data distributions, differ between training and testing. This is because they typically rely on the independent and identically distributed assumption and require large numbers of labeled training samples; moreover, labeling large datasets is time-consuming and expensive. To address these challenges, researchers have proposed domain generalization (DG). DG methods aim to generalize to an unseen target domain using only training data from the source domains. Mainstream domain generalization algorithms tend to make special modifications to a general feature extraction network such as ResNet, or to append more complex parameterized modules after it. Modifying its components may weaken the strong feature extraction ability of a network that is well pre-trained on large-scale datasets, while appending complex parameterized modules deepens the network and makes the model more demanding on computing resources. In this paper, we introduce a feature transfer model based on Mixup and contrastive learning that neither alters the general feature extraction network nor adds modules after it. We propose a new sample selection strategy that cooperates with the contrastive loss and design a feature transfer model that makes the extracted features more generalizable. Experiments show that the proposed strategy and feature transfer model work well and that our method outperforms conventional domain generalization methods.
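To make the two ingredients named above concrete, the following is a minimal sketch of feature-level Mixup combined with a standard InfoNCE contrastive loss, assuming a PyTorch-style setup with features from a frozen pre-trained backbone. The pairing of anchors and positives, the sample selection strategy, and the feature transfer architecture are the paper's contributions and are not reproduced here; the tensor shapes and hyperparameters below are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def mixup_features(feats_a, feats_b, alpha=0.2):
    """Interpolate two batches of features with a Beta-sampled coefficient (standard Mixup)."""
    lam = Beta(alpha, alpha).sample().item()
    return lam * feats_a + (1.0 - lam) * feats_b, lam

def info_nce_loss(anchor, positive, temperature=0.1):
    """Standard InfoNCE contrastive loss: each anchor is pulled toward its positive
    and pushed away from the other samples in the batch."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

# Hypothetical example: 512-d features from a frozen ResNet backbone for two
# source-domain batches; the backbone itself is left unchanged.
feats_a = torch.randn(32, 512)
feats_b = torch.randn(32, 512)

mixed, lam = mixup_features(feats_a, feats_b)
loss = info_nce_loss(mixed, feats_a)  # illustrative anchor/positive pairing only
```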