With the progress of science and technology, digital images have become an indispensable part of daily life. However, compared with what the human visual system perceives in a real scene, the dynamic range of the digital images in common use is limited, and their quality cannot satisfy people's needs. In this context, high dynamic range (HDR) images have attracted attention and become a research hotspot in the field of image processing. In existing research, merging images captured at different exposures is a common approach, but the generated HDR image must then be recompressed into a low dynamic range (LDR) image by a tone mapping algorithm before it can be shown on an ordinary display device. This paper proposes a multi-exposure image fusion method based on a convolutional neural network that skips these intermediate steps of the traditional pipeline: LDR images with different exposure degrees are merged directly into a single image by an improved encoder-decoder model, with a specified image serving as the content reference. The research contents are as follows:

(1) Based on the traditional encoder-decoder generation model, skip connections are used to construct a U-Net model, which better preserves multi-level image information during the fusion process. Image features are aligned and fused with respect to the reference image, so that the ghosting problem in multi-exposure fusion is handled well. On this basis, a residual unit structure is introduced: the generator network becomes deeper while the image feature dimensions remain unchanged, which enhances the local detail quality of the generated image.

(2) The generative adversarial network (GAN) framework is introduced into the training process. This paper discusses the advantages and disadvantages of generator networks trained with the L2-norm loss function and of the discriminator network model, and proposes a patch discriminator network to address poor local detail quality. Extensive experiments show that the proposed method improves the quality of image details in multi-exposure image fusion tasks and works effectively in removing motion artifacts.
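The components described above can be illustrated with a minimal PyTorch sketch. This is not the thesis's actual network; it is a hypothetical toy version, assuming three RGB exposures stacked on the channel axis as input, one downsampling stage instead of several, and illustrative channel widths. It shows the three ingredients the abstract names: a skip connection between encoder and decoder (U-Net style), a residual unit that deepens the network without changing feature dimensions, and a patch discriminator that scores local regions rather than the whole image.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual unit: two 3x3 convs whose output is added to the input,
    so depth grows while the feature dimension stays unchanged."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class TinyUNet(nn.Module):
    """Minimal encoder-decoder with one skip connection (U-Net style).
    Input: differently exposed LDR frames concatenated on the channel
    axis (here 3 RGB exposures = 9 channels); output: one fused image."""
    def __init__(self, in_ch=9, base=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Sequential(
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True))
        self.res = ResidualBlock(base * 2)          # deepen at the bottleneck
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.res(self.down(e)))
        return self.dec(torch.cat([d, e], dim=1))   # skip connection

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic: outputs a grid of real/fake scores, each
    judging one local receptive field, which targets local detail quality."""
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),  # patch score map
        )

    def forward(self, x):
        return self.net(x)

# Shape check: three 64x64 RGB exposures in, one fused image out,
# and a grid of per-patch discriminator scores.
x = torch.randn(1, 9, 64, 64)
fused = TinyUNet()(x)
scores = PatchDiscriminator()(fused)
```

In training, the generator loss would combine an L2 reconstruction term against the reference with an adversarial term from the per-patch scores, which is the trade-off the abstract discusses.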