
Research And Implementation Of Landscape Images Outpainting Based On Generative Adversarial Networks

Posted on: 2022-02-15 | Degree: Master | Type: Thesis
Country: China | Candidate: D Y Meng | Full Text: PDF
GTID: 2558307145463704 | Subject: Electronic and communication engineering
Abstract/Summary:
Image inpainting is the process of reconstructing missing parts of an image. Image outpainting is a sub-problem of image inpainting: from the known image content, it generates regions beyond the boundary that do not exist in the original image. A Generative Adversarial Network (GAN) is a model for data generation in which an adversarial game between the generator and the discriminator drives the generator toward better outputs. Compared with other image completion methods (autoencoders, sequence models, etc.), the results of GAN-based methods are more realistic. Taking landscape images as an example, this thesis uses generative adversarial networks to study the problem of landscape image outpainting. The main research contents are as follows:

1. Construct a landscape image dataset. The training set contains 12,000 scene images of mountains, oceans, islands, etc., and the test set contains 200 images, all at a resolution of 256×512. The images were manually filtered from Google, Baidu, and the Places365-Challenge dataset. The data are divided into two categories: simple scenes (a single level of texture and structure) and complex scenes (multiple levels of texture and structure).

2. Drawing on the WGAN-GP adversarial network structure, design the S_IOGAN (Simple scene_Image Outpainting GAN) network for landscape image outpainting. The network includes one generator and two discriminators. The generator consists of two parts, a feature learning module and an extended generation module, connected by a sub-pixel convolution. The feature learning module is composed of six convolutional layers, two deconvolutional layers, and four residual blocks based on dilated convolution (each residual block contains two 3×3 dilated convolutions). The sub-pixel convolution uses a 3×3 convolution kernel. The extended generation module adds three convolutional layers on top of the feature learning module. Each discriminator is composed of eight convolutional layers, a fully connected layer, and a flatten layer. The model is trained with a fine-tuning strategy. Experimental results show that the PSNR and SSIM of S_IOGAN improve by 0.405 and 0.0209, respectively, over the mainstream algorithm (WS), and the test time for a single image is about 0.245 seconds.

3. Drawing on the ideas of the attention mechanism and densely connected convolutional networks, design the C_IOGAN (Complex scene_Image Outpainting GAN) network on the basis of S_IOGAN to further improve the quality of the outpainting results. C_IOGAN includes a generator composed of a feature learning module GA and an extended generation module GB, plus two discriminators. The specific improvements are: (1) replace the residual blocks of the S_IOGAN generator with dense blocks based on dilated convolution (each dense block contains four 3×3 dilated convolutions); (2) add an attention module, composed of three convolutional layers and a Softmax layer, after the sixth convolutional layer of the feature learning module, and replace the original feature map with the attention feature map; (3) replace the deconvolution layers of the extended generation module with up-sampling followed by convolution. Experimental results show that the PSNR and SSIM of C_IOGAN improve by 0.360 and 0.0131, respectively, over S_IOGAN, and the test time for a single image is about 0.352 seconds.
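The sub-pixel convolution that joins the two generator modules ends with a channel-to-space rearrangement (pixel shuffle). A minimal NumPy sketch of that rearrangement, independent of any framework — the array shapes are illustrative, not taken from the thesis:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r).

    This is the channel-to-space step of sub-pixel convolution;
    in a full layer it follows a convolution (here a 3x3 one)
    that produces the C*r^2 channels.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    # Split channels into (C, r, r), then interleave them into the spatial dims.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Upscale a 4-channel 2x2 map by a factor of 2 into a 1-channel 4x4 map.
feat = np.arange(16, dtype=np.float32).reshape(4, 2, 2)
out = pixel_shuffle(feat, 2)
print(out.shape)  # (1, 4, 4)
```

Each output 2×2 neighborhood draws one pixel from each of the four input channels, which is why sub-pixel convolution upsamples without the checkerboard artifacts deconvolution can produce.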
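The attention module in C_IOGAN ends in a Softmax layer whose output reweights the feature map. The thesis does not spell out the layer shapes, so the sketch below abstracts the three convolutional layers into a single per-location score (the channel mean); only the Softmax reweighting step is meant literally:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat):
    """Reweight a (C, H, W) feature map by a softmax attention map.

    Illustrative stand-in for an attention module: one score per spatial
    location, Softmax over all H*W locations, then rescale the original
    features to obtain the attention feature map.
    """
    c, h, w = feat.shape
    scores = feat.mean(axis=0).reshape(-1)  # one score per location (hypothetical scoring)
    attn = softmax(scores).reshape(h, w)    # attention map, sums to 1 over locations
    return feat * attn[None, :, :]          # attention feature map

feat = np.random.default_rng(0).standard_normal((8, 4, 4))
out_attn = spatial_attention(feat)
print(out_attn.shape)  # (8, 4, 4)
```

Because the attention map sums to one over locations, the module redistributes emphasis across the feature map rather than changing its overall scale arbitrarily.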
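PSNR, one of the two metrics the experiments report, is computed from the mean squared error between the generated image and the ground truth. The thesis does not give its exact implementation; the sketch below is the standard definition for 8-bit images:

```python
import numpy as np

def psnr(reference, generated, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images.

    Standard definition: 10 * log10(peak^2 / MSE). Higher is better;
    identical images give an MSE of 0, which is handled as infinity.
    """
    diff = reference.astype(np.float64) - generated.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform error of 1 gray level gives MSE = 1,
# so PSNR = 10 * log10(255^2) ~ 48.13 dB.
ref = np.zeros((4, 4), dtype=np.uint8)
gen = ref + 1
print(round(psnr(ref, gen), 2))  # 48.13
```

Since PSNR is a log-scale ratio, the reported gains of 0.405 dB (S_IOGAN over WS) and 0.360 dB (C_IOGAN over S_IOGAN) correspond to a measurable reduction in per-pixel reconstruction error.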
Keywords/Search Tags: Image Outpainting, Generative Adversarial Nets, S_IOGAN, C_IOGAN, Attention Mechanism