| Supervised learning has achieved remarkable results in computer vision, natural language processing, recommendation systems, and other applications. However, supervised learning depends on large amounts of labeled data, which inevitably suffers from noisy labels. Label noise can be divided into three types: random label noise, label-space noise, and sample-space noise. This paper focuses on the label noise problem in the label space.

Starting from robust loss functions and loss-adjustment methods, this paper first introduces the assumption of imperfect supervised learning: during supervised learning, the network back-propagates directly on noisy labels, so the noisy labels corrupt the reasonable representations and the correct metric relationships of samples under the network mapping. Under this assumption, we propose a reliable representation learning framework consisting of anchored representation learning, confidence representation learning, and exploratory representation learning. Anchored representation learning targets the biased representations produced under imperfect supervision: features obtained by unsupervised learning are used to constrain the features obtained from imperfect supervised learning, thereby acting as an anchor. Confidence representation learning uses the unsupervised features from the anchoring step to compute a threshold, which serves as a selector that screens samples and labels them as positive or negative (i.e., clean or noisy); it further uses the reliable metric relationships obtained by unsupervised learning to correct the metric relationships distorted by imperfect supervision. Exploratory representation learning refines the noisy-label decisions of the previous step: since those decisions may be mistaken and discard useful supervision, the useful information in samples judged noisy is exploited a second time with a meta-learning method that corrects labels from both the gradient and the physical perspective.

In addition, to further alleviate imperfect supervised learning from the perspective of sample selection, this paper explores a symmetric learning training framework. Samples are screened before the network is updated, and as the training data become cleaner, the dilemma of imperfect supervised learning is further eased. The framework uses two models with the same structure: on each mini-batch, each model selects the small-loss samples as useful knowledge and passes them to the other model for further training.

In the experiments, we compare our methods thoroughly with noise-robust loss functions, supporting the proposed hypothesis of imperfect supervised learning and validating both the reliable representation framework and the symmetric learning training framework for imperfect supervised learning. For the datasets, we construct new noisy versions of three datasets of gradually increasing difficulty by varying both the noise ratio and the noise type, covering different noise ratios and noise types; extensive experiments verify that the proposed methods effectively alleviate the problem of imperfect supervised learning.
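
As a rough illustration of the symmetric learning step summarized above, the following PyTorch-style sketch shows how two models of the same structure might exchange their small-loss samples within a mini-batch. The names `model_a`, `model_b`, and `keep_ratio` are illustrative assumptions, not identifiers from the paper, and the sketch omits details such as how the keep ratio is scheduled over epochs.

```python
# Minimal sketch of one symmetric-learning training step: two identical models
# each select their small-loss samples and pass them to the peer for training.
# All names here are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F


def symmetric_step(model_a, model_b, opt_a, opt_b, images, labels, keep_ratio):
    """One mini-batch update in which each model trains on the samples
    that the other model judged to be clean (smallest per-sample loss)."""
    # Per-sample losses; no gradients are needed for the selection itself.
    with torch.no_grad():
        loss_a = F.cross_entropy(model_a(images), labels, reduction="none")
        loss_b = F.cross_entropy(model_b(images), labels, reduction="none")

    # Keep the fraction of samples with the smallest loss (treated as clean).
    num_keep = max(1, int(keep_ratio * labels.size(0)))
    idx_from_a = torch.argsort(loss_a)[:num_keep]  # samples selected by model A
    idx_from_b = torch.argsort(loss_b)[:num_keep]  # samples selected by model B

    # Cross update: model A learns from B's selection, and vice versa.
    opt_a.zero_grad()
    F.cross_entropy(model_a(images[idx_from_b]), labels[idx_from_b]).backward()
    opt_a.step()

    opt_b.zero_grad()
    F.cross_entropy(model_b(images[idx_from_a]), labels[idx_from_a]).backward()
    opt_b.step()
```

The cross-exchange is the key design choice: because the two models start from different initializations, they tend to make different selection errors, so training each model on the peer's small-loss samples helps keep noisy labels from being reinforced in either network.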