Diabetic retinopathy (DR) is a common retinal complication of diabetes. With the rapid development of deep learning, medical image diagnosis has been widely applied to intelligent DR diagnosis. However, deep neural networks typically require large amounts of training data labeled by professional doctors, and such data are scarce and unbalanced in practice, particularly for exact lesion locations, which limits the application of deep learning in intelligent DR diagnosis. Given the scarcity and imbalance of labeled fundus images, an effective current approach is to synthesize fundus images with generative models. However, existing methods rely excessively on the vessel and lesion masks of the input images and ignore class consistency between the synthesized and real images. To address these shortcomings, this work carries out the following two lines of research.

1) A fundus image generation model based on the idea of class consistency. This model uses a conditional generative adversarial network as the backbone and adds a class-consistency loss and an improved retinal detail loss, which together with the adversarial loss constrain the generator to produce high-quality retinal images. The class-consistency loss constrains the class features of the real and synthesized images to agree, forcing the synthesized images to carry the pathological features of the corresponding class. The improved retinal detail loss adds a low-level semantic feature constraint to the high-level semantic feature constraint, jointly driving the synthesized images to share both physiological and pixel-level features with the real images. A sketch of this composite objective is given below.

2) A fundus image generation model based on the idea of unpaired image-to-image translation. Viewing the task as a translation between normal retinal images and DR images, this model uses a cycle-consistent adversarial network as the backbone to translate between the two domains without paired vessel images or lesion masks. In addition, a channel-spatial attention module is added to improve the detail quality of the synthesized images. Finally, the class-consistency loss, adversarial loss, and cycle-consistency loss jointly constrain the model to learn the mapping between normal retinal images and DR images; sketches of the translation objective and the attention module follow below.

Both models were validated experimentally on public retinal image datasets and evaluated with both subjective visual inspection and objective indicators, including FID and SWD, which measure the quality and diversity of the synthesized images, as well as vessel segmentation and classification accuracy. Experimental results show that, compared with current methods, the retinal images synthesized by the two models have good visual quality and are competitive on each objective evaluation criterion, although the expressiveness of lesion features still needs improvement.
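To make the composite objective in 1) concrete, the following is a minimal PyTorch-style sketch of how the adversarial, class-consistency, and improved retinal detail terms could be combined. The names `G`, `D`, `C`, `feat_net`, and the weights `lambda_cls` and `lambda_detail` are illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn.functional as F

def generator_loss(G, D, C, feat_net, real_img, label, z,
                   lambda_cls=1.0, lambda_detail=10.0):
    """Composite generator objective: adversarial + class-consistency
    + improved retinal detail loss. Weights are placeholder values."""
    fake_img = G(z, label)                      # conditional synthesis

    # Adversarial term: the conditional discriminator should judge
    # the synthesized image as real.
    logits = D(fake_img, label)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Class-consistency: class features of the synthesized image
    # (from a pretrained DR classifier C) should match those of a
    # real image of the same class.
    cls = F.l1_loss(C(fake_img), C(real_img))

    # Improved retinal detail loss: high-level semantic features
    # (from a pretrained feature extractor) plus a low-level
    # pixel-wise constraint.
    detail = F.l1_loss(feat_net(fake_img), feat_net(real_img)) \
             + F.l1_loss(fake_img, real_img)

    return adv + lambda_cls * cls + lambda_detail * detail
```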
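For 2), a similar sketch shows how the adversarial, cycle-consistency, and class-consistency terms could be assembled for unpaired translation between normal and DR images. The generator/discriminator names and loss weights are again assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def translation_loss(G_n2d, G_d2n, D_dr, D_norm, C, normal, dr,
                     lambda_cyc=10.0, lambda_cls=1.0):
    """Total objective for unpaired normal<->DR translation:
    adversarial + cycle-consistency + class-consistency."""
    fake_dr = G_n2d(normal)        # normal fundus -> synthesized DR
    fake_norm = G_d2n(dr)          # DR fundus -> synthesized normal

    # Adversarial terms (least-squares form, as in CycleGAN).
    d_dr, d_norm = D_dr(fake_dr), D_norm(fake_norm)
    adv = F.mse_loss(d_dr, torch.ones_like(d_dr)) + \
          F.mse_loss(d_norm, torch.ones_like(d_norm))

    # Cycle-consistency: translating forward and back should
    # reconstruct the original image without paired supervision.
    cyc = F.l1_loss(G_d2n(fake_dr), normal) + F.l1_loss(G_n2d(fake_norm), dr)

    # Class-consistency: synthesized images should carry the class
    # features of their target domain (pretrained classifier C).
    cls = F.l1_loss(C(fake_dr), C(dr)) + F.l1_loss(C(fake_norm), C(normal))

    return adv + lambda_cyc * cyc + lambda_cls * cls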
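The channel-spatial attention module mentioned in 2) is not specified in detail here; the sketch below follows the common CBAM-style design (channel attention followed by spatial attention) and is only one plausible realization.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style channel-spatial attention block (illustrative)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        # Spatial attention: 7x7 conv over pooled channel maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))
```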
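Finally, FID, one of the objective indicators used above, compares the Gaussian statistics of Inception features of real and synthesized images. A minimal computation, assuming the feature matrices have already been extracted, is sketched below.

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Frechet Inception Distance from pre-extracted Inception feature
    matrices (one row per image)."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # sqrtm may return small imaginary parts due to numerical error.
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)
```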