
Image Translation Model Analysis Based On Image Quality Enhancement

Posted on: 2020-12-28    Degree: Master    Type: Thesis
Country: China    Candidate: L Chen    Full Text: PDF
GTID: 2428330575996895    Subject: Computer technology
Abstract/Summary:
Many real-world computer vision tasks, such as watermark removal and stylization, can be treated as image-to-image translation problems. Following the success of Generative Adversarial Networks (GANs) in image generation, GANs have been widely used for image-to-image translation. While early models relied heavily on labeled image pairs, several GAN variants have recently been proposed to tackle the unpaired translation task; these models exploit supervision at the domain level together with a reconstruction process. To address the poor quality of images generated by existing unpaired image translation models, a feasible idea is to evaluate the quality of the generated image through a quality-aware loss function, thereby guiding the model to generate high-quality images. Based on this idea, this thesis designs two implementations, as follows.

In the second chapter, we propose an adaptive quality perception loss. Parallel works have shown that perceptual loss functions based on high-level deep features can enhance the quality of generated images. However, because these GAN-based models either depend on a pre-trained deep network or rely on labeled image pairs, they cannot be directly applied to unpaired image translation. The proposed adaptive quality perception loss therefore uses features extracted by the generator itself, comparing the high-level content structure of each original image with that of its reconstructed image to improve the quality of the generated image. Experiments on four commonly used datasets show that the proposed model effectively improves the quality of generated images.

In the third chapter, we propose a quality perception loss based on a classical image quality assessment measure. In unpaired image translation, few researchers have explored improving generated image quality with classical image quality measures. Based on such a measure, we propose a quality-aware loss that enforces a similar quality score between an original image and its reconstructed image at the domain level. Experimental results on four commonly used datasets show that the proposed model effectively improves the quality of generated images. Finally, the advantages and disadvantages of the two proposed models are demonstrated through experiments and analysis.
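To make the second chapter's idea concrete, the following is a minimal PyTorch sketch of a perceptual loss built from the translation generator's own encoder features, rather than a pre-trained network. The abstract does not give implementation details, so the shared encoder, the compared layer indices, and the L1 feature distance are illustrative assumptions, not the thesis's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveQualityPerceptionLoss(nn.Module):
    """Sketch: perceptual loss from the generator's own encoder features.

    `encoder` is assumed to be an nn.Sequential shared with the generator;
    `layer_ids` selects which stages to compare (both are assumptions).
    """

    def __init__(self, encoder, layer_ids=(1, 2, 3)):
        super().__init__()
        self.encoder = encoder
        self.layer_ids = set(layer_ids)

    def _features(self, x):
        feats, h = [], x
        for i, layer in enumerate(self.encoder):
            h = layer(h)
            if i in self.layer_ids:
                feats.append(h)
        return feats

    def forward(self, original, reconstructed):
        # Compare the high-level content structure of each original image
        # with that of its reconstruction; the original's features serve
        # as a fixed target (detached from the graph).
        loss = 0.0
        for f_orig, f_rec in zip(self._features(original),
                                 self._features(reconstructed)):
            loss = loss + F.l1_loss(f_rec, f_orig.detach())
        return loss / max(len(self.layer_ids), 1)
```

In a CycleGAN-style objective this term would typically be weighted and added to the adversarial and cycle-consistency losses, e.g. `total = adv + lam_cyc * cyc + lam_q * quality_loss`, where the weights are illustrative.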
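The abstract does not name the classical image quality assessment measure used in the third chapter. The sketch below uses SSIM purely as a stand-in to show how a quality-aware term between an original image and its reconstruction could enter the training objective; the window size, constants, and the assumption of inputs in [0, 1] are illustrative choices.

```python
import torch
import torch.nn.functional as F

def _gaussian_window(size=11, sigma=1.5, channels=3):
    # Separable Gaussian window used for local statistics.
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)
    return (g.t() @ g).expand(channels, 1, size, size).contiguous()

def ssim_quality_loss(original, reconstructed, c1=0.01 ** 2, c2=0.03 ** 2):
    """Stand-in quality-aware loss: 1 - SSIM(original, reconstruction).

    Minimizing it pushes the reconstruction toward the original image's
    luminance, contrast, and structure. Inputs are assumed to be in [0, 1].
    """
    channels = original.size(1)
    window = _gaussian_window(channels=channels).to(original)
    mu_x = F.conv2d(original, window, padding=5, groups=channels)
    mu_y = F.conv2d(reconstructed, window, padding=5, groups=channels)
    sigma_x = F.conv2d(original * original, window, padding=5, groups=channels) - mu_x ** 2
    sigma_y = F.conv2d(reconstructed * reconstructed, window, padding=5, groups=channels) - mu_y ** 2
    sigma_xy = F.conv2d(original * reconstructed, window, padding=5, groups=channels) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return 1.0 - ssim_map.mean()
```

As with the perceptual term above, this loss would be applied to each original image and its reconstruction at the domain level and combined with the remaining GAN losses.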
Keywords/Search Tags: image-to-image translation, generative adversarial networks, image quality assessment