Semantic segmentation training data requires pixel-level manual labeling, which is far more expensive than that of other vision tasks such as image classification and object detection. Semi-supervised methods take pseudo supervision as their core idea, in particular the generation of pseudo labels. Current research focuses on improving the quality of pseudo labels, helping the model locate its own errors, and making maximum use of unlabeled data.

To address the problem that knowledge newly learned by the model causes "catastrophic forgetting" of previously learned knowledge, this paper proposes using the pseudo labels generated by the previous model to rehearse earlier knowledge and thereby improve the quality of the pseudo labels generated by the current model. In addition, this paper proposes mutual training between divergent models to alleviate the difficulty a single model has in identifying its own errors. Based on these ideas, this paper proposes a semi-supervised semantic segmentation algorithm built on dynamic mutual training and pseudo label enhancement. Experiments show that the algorithm outperforms the DMT algorithm, with a maximum improvement of +3.31%.

To further address the difficulty a single model has in locating its own errors, this paper proposes a semi-supervised semantic segmentation method based on mixed pseudo labels. For the same input image, consistency is imposed on two segmentation models perturbed with different initializations. Of the pseudo labels generated by the two segmentation models, the one with higher confidence is selected as the mixed pseudo label and used to supervise both segmentation models simultaneously. This has two advantages: 1) the quality of the pseudo labels is improved; and 2) each segmentation network can use the divergence between the models to locate its own errors while continuously consolidating what it has learned. This paper also incorporates ClassMix augmentation to mix input images and obtain more diverse training data. Experiments show that the algorithm outperforms the CPS + CutMix algorithm, with a maximum improvement of +1.02%.
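The confidence-based selection described above can be illustrated with a minimal sketch, not the authors' implementation: for each pixel, the pseudo label of whichever of the two differently initialized models is more confident is kept, and the resulting mixed pseudo label supervises both models. The function and variable names (mixed_pseudo_label, logits_a, logits_b) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mixed_pseudo_label(logits_a: torch.Tensor, logits_b: torch.Tensor):
    """Build a per-pixel mixed pseudo label from two models' raw outputs.

    logits_a, logits_b: (N, C, H, W) segmentation logits of the two models.
    Returns the mixed label map (N, H, W) and its per-pixel confidence.
    """
    prob_a = F.softmax(logits_a, dim=1)
    prob_b = F.softmax(logits_b, dim=1)

    conf_a, label_a = prob_a.max(dim=1)  # per-pixel confidence and class, model A
    conf_b, label_b = prob_b.max(dim=1)  # per-pixel confidence and class, model B

    # Keep, per pixel, the prediction of the more confident model.
    use_a = conf_a >= conf_b
    mixed_label = torch.where(use_a, label_a, label_b)
    mixed_conf = torch.where(use_a, conf_a, conf_b)
    return mixed_label, mixed_conf

# The mixed label would then supervise BOTH models on the unlabeled image, e.g.:
# loss = F.cross_entropy(model_a(images), mixed_label) + \
#        F.cross_entropy(model_b(images), mixed_label)
```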