Deep learning has been applied successfully in fields such as computer vision and image recognition, where large-scale datasets play a key role. However, accurate and complete annotation of these datasets is difficult to achieve. Datasets are typically annotated through crowd-sourcing platforms or harvested from the Web; although this speeds up labeling, annotation errors easily introduce label noise, while fully manual labeling is time-consuming and costly. Robust image classification in the presence of label noise is therefore an important research problem. To address it, this paper proposes the following three methods.

Firstly, to address the tendency of a single neural network to accumulate the error flow caused by noisy labels, a regularized robust classification model based on a joint neural network is proposed. Two independent deep neural networks with the same architecture but different initializations form a joint network. Under an added regularization constraint, the two peer networks select small-loss samples for each other to update their parameters, so that each filters out the other's error flow. The two networks, which develop different learning abilities, produce increasingly similar predictions while effectively resisting label noise.

Secondly, supervised deep learning relies on sample labels, so its accuracy suffers when those labels are noisy. To reduce the model's dependence on labels, a semi-supervised robust classification model based on contrastive learning is proposed. Contrastive learning mines the feature representations of the dataset itself: by comparing the similarity between positive and negative sample pairs, it pulls images of the same kind closer together and pushes images of different kinds farther apart. In the semi-supervised method, samples with noisy labels are treated as unlabeled and assigned pseudo-labels for training, which further improves the generalization ability and robustness of the model.

Finally, deep neural networks exhibit a memorization effect: they learn clean samples more easily early in training, but as training progresses they memorize the label-noise samples, resulting in over-fitting. To counteract this memorization and prevent over-fitting, this paper proposes initializing the deep neural network with parameters pre-trained by contrastive learning (SimCLR). The pre-trained initialization extracts higher-quality feature representations in the early stage of training and lowers the network's confidence on noisily labeled samples, thereby improving its ability to resist label noise.
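To make the first method concrete, the sketch below (assuming PyTorch) shows one training step of the joint network in the spirit of co-teaching: each peer selects its small-loss samples for the other's update, and a symmetric-KL consistency term stands in for the regularization constraint. The function name `joint_step`, the symmetric-KL form, and the hyper-parameters `remember_rate` and `reg_weight` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_step(net1, net2, opt1, opt2, x, y, remember_rate=0.8, reg_weight=0.1):
    """One sketched step of the joint network (hypothetical formulation)."""
    logits1, logits2 = net1(x), net2(x)

    # Per-sample losses; small-loss samples are treated as likely clean.
    loss1 = F.cross_entropy(logits1, y, reduction="none")
    loss2 = F.cross_entropy(logits2, y, reduction="none")
    k = max(1, int(remember_rate * y.size(0)))
    idx1 = torch.argsort(loss1)[:k]  # samples net1 believes are clean
    idx2 = torch.argsort(loss2)[:k]  # samples net2 believes are clean

    # Cross update: each network learns from its peer's selection,
    # so the two error flows filter each other.
    ce = loss1[idx2].mean() + loss2[idx1].mean()

    # Symmetric-KL consistency term (an assumed form of the paper's
    # regularization constraint) pulls the two predictions together.
    p1 = F.log_softmax(logits1, dim=1)
    p2 = F.log_softmax(logits2, dim=1)
    consistency = 0.5 * (F.kl_div(p1, p2, reduction="batchmean", log_target=True)
                         + F.kl_div(p2, p1, reduction="batchmean", log_target=True))

    opt1.zero_grad(); opt2.zero_grad()
    (ce + reg_weight * consistency).backward()
    opt1.step(); opt2.step()
    return ce.item()
```

A single combined backward pass updates both peers at once, which avoids having to retain the computation graph between two separate updates.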
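The second method can be sketched similarly: small-loss samples keep their given labels, the remaining samples are treated as unlabeled and receive confident pseudo-labels, and the two losses are combined. The split rule, the confidence `threshold`, and the weight `lam` are assumptions for illustration, and the contrastive-learning branch of the full model is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, opt, x, y, noise_rate=0.2, threshold=0.95, lam=1.0):
    """One sketched semi-supervised step (hypothetical formulation)."""
    logits = model(x)
    per_sample = F.cross_entropy(logits, y, reduction="none")

    # Small-loss samples keep their labels; the rest are treated as
    # unlabeled, discarding their (possibly noisy) labels.
    k = max(1, int((1.0 - noise_rate) * y.size(0)))
    order = torch.argsort(per_sample)
    clean_idx, noisy_idx = order[:k], order[k:]
    sup_loss = per_sample[clean_idx].mean()

    # Pseudo-label the "unlabeled" part with confident predictions only.
    with torch.no_grad():
        probs = F.softmax(logits[noisy_idx], dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    unsup = F.cross_entropy(logits[noisy_idx], pseudo, reduction="none")
    unsup_loss = (unsup * mask).sum() / mask.sum().clamp(min=1.0)

    loss = sup_loss + lam * unsup_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```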
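For the third method, the pre-training objective is the standard SimCLR NT-Xent loss, a minimal version of which is sketched below; `temperature` is the usual SimCLR hyper-parameter.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views z1, z2 of shape (N, d);
    the positive for row i is the other view of the same image."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = (z @ z.t()) / temperature                     # scaled cosine similarities
    # Exclude self-similarity from the softmax candidates.
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

After pre-training the encoder with this loss, its weights replace the random initialization of the classification network, e.g. by attaching a fresh linear head to the pre-trained encoder before supervised fine-tuning; the exact wiring is an assumption, as the source specifies only that SimCLR pre-training provides the initialization.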