With the growing demand for intelligent technology, image-to-image translation plays an important role in many fields. In the entertainment industry, it can be used for beauty enhancement and AI face swapping; in public security, it can help the police simulate and reconstruct the appearance of suspects without masks or glasses; in the medical field, it can assist doctors with diagnosis and surgical planning. As the technology continues to advance and its application scenarios expand, image-to-image translation is expected to play an important role in even more fields.

Image-to-image translation aims to convert images from a source domain into a target domain. In real-world scenarios, however, images are affected by factors such as varying shooting angles, lighting conditions, and complex backgrounds, all of which pose significant challenges to the task. Existing multi-domain image-to-image translation methods often exhibit poor diversity and poor detail quality on such complex tasks. As the field has developed, generative adversarial networks have become the mainstream approach to image transformation. This paper therefore builds on StarGAN v2, proposes a new multi-domain image-to-image translation model named DFE2C-GAN, and develops a multi-domain image-to-image translation system based on this model. The work focuses on the following research topics:

(1) To address insufficient diversity in generated images, we adopt a contrastive learning method and introduce a contrastive loss to replace the style diversification loss in the original StarGAN v2. This approach enables the model to consider both "close" and "far" relationships, learn an unbiased distribution, and improve the diversity of generated images.

(2) To address blurry details in generated images, we propose a dynamic feature enhancement module used as a downsampling layer. The module combines dynamic convolution with attention mechanisms, adaptively adjusting filter positions according to the input image features; it extracts finer detail features and provides important guidance during generator upsampling, thereby improving the quality of the generated images.

Finally, extensive experiments verify the effectiveness of the proposed DFE2C-GAN. Based on this algorithm, a multi-domain image-to-image translation system was designed and developed according to market and user requirements. The system not only provides efficient image-to-image translation, but also features a user-friendly interactive interface and a simple, easy-to-use workflow, giving users a good experience. The system is also highly extensible: new datasets and task models can be added easily, offering users more diverse image translation options. In actual testing, the system met the expected requirements in both performance and visual quality.
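The contrastive loss of topic (1) can be illustrated with a minimal InfoNCE-style sketch in PyTorch. This is an assumption about the general form, not the thesis's exact formulation: the function name, temperature value, and tensor shapes are all hypothetical. The key contrast with StarGAN v2's style diversification loss (which only pushes pairs apart) is that a contrastive loss also pulls an anchor toward a designated positive.

```python
import torch
import torch.nn.functional as F

def contrastive_style_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss over style embeddings (illustrative).

    Pulls `anchor` toward its `positive` ("close") while pushing it away
    from `negatives` ("far"), so both relationships are considered.
    Shapes: anchor/positive (B, D); negatives (B, N, D).
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = (anchor * positive).sum(-1, keepdim=True)          # (B, 1)
    neg_sim = torch.einsum("bd,bnd->bn", anchor, negatives)      # (B, N)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature  # (B, 1+N)
    # The positive sits at column 0, so every label is 0.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```

In practice the anchor and positive would come from the same style code (e.g. two generations conditioned on one style), while negatives come from other style codes in the batch; those pairing choices are design decisions not specified by the abstract.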
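A downsampling block combining dynamic convolution with attention, as described in topic (2), might be sketched as follows. This is a hypothetical illustration assuming the common "mixture of K kernels" form of dynamic convolution plus a squeeze-and-excitation channel gate; the thesis's actual dynamic feature enhancement module may differ in structure and detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicDownsample(nn.Module):
    """Illustrative downsampling block: dynamic convolution + channel attention.

    A router predicts input-dependent weights that mix K parallel kernels,
    so the effective filter adapts to the input features; a channel gate
    then re-weights the downsampled feature maps. Hypothetical sketch.
    """
    def __init__(self, in_ch, out_ch, k=4, kernel_size=3):
        super().__init__()
        self.kernels = nn.Parameter(
            torch.randn(k, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.router = nn.Sequential(              # predicts kernel mixing weights
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, k))
        self.gate = nn.Sequential(                # channel attention (SE-style)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(out_ch, out_ch // 4), nn.ReLU(),
            nn.Linear(out_ch // 4, out_ch), nn.Sigmoid())
        self.pad = kernel_size // 2

    def forward(self, x):
        b = x.size(0)
        alpha = F.softmax(self.router(x), dim=1)                 # (B, K)
        w = torch.einsum("bk,koihw->boihw", alpha, self.kernels)  # per-sample kernels
        # Grouped-conv trick: apply each sample's own kernel in one call.
        x = x.reshape(1, -1, x.size(2), x.size(3))
        w = w.reshape(-1, w.size(2), w.size(3), w.size(4))
        y = F.conv2d(x, w, stride=2, padding=self.pad, groups=b)  # downsample by 2
        y = y.reshape(b, -1, y.size(2), y.size(3))
        g = self.gate(y).unsqueeze(-1).unsqueeze(-1)              # (B, C, 1, 1)
        return y * g
```

The stride-2 dynamic convolution halves the spatial resolution while the gated output can be kept as a skip feature to guide the generator's upsampling path, matching the role the abstract describes for the module.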