
Multi-focus Image Fusion Based On Generative Adversarial Network

Posted on: 2022-09-16
Degree: Master
Type: Thesis
Country: China
Candidate: Y Du
Full Text: PDF
GTID: 2518306338496114
Subject: Information and Communication Engineering

Abstract/Summary:
Multi-focus image fusion is a hot topic in image processing. It is mainly used to fuse multi-focus images of the same scene that are acquired under identical imaging conditions but with different focus targets. In recent years, with the development of deep learning, convolutional neural networks have been applied to multi-focus image fusion more and more widely. Most CNN-based multi-focus fusion methods convert the fusion task into a binary classification problem of distinguishing focused pixels from defocused pixels: a network is trained to judge the focus level of each pixel and to generate a decision map, and the source images are then fused under the guidance of that decision map.

However, these methods still face some shortcomings. First, because fusion is carried out according to the decision map, the fusion quality depends entirely on the accuracy of the generated map; as image complexity increases, the accuracy of the decision map decreases. Second, training a convolutional neural network requires a large number of labeled samples, but no such data set exists for this task. Existing CNN-based methods simply blur all-in-focus images with Gaussian blurring to synthesize training data, which cannot fully represent natural images.

To avoid generating a decision map and to improve the quality of the fused images, this thesis first designs a generative adversarial network with a single discriminator for multi-focus image fusion. The network consists of two parts: a generator and a discriminator. The generator takes the source multi-focus images as input and outputs the generated image, while the discriminator receives either a "real" all-in-focus image or a generated "fake" image. The generator tries to produce a "fake" image that is sharp enough to pass as real, while the discriminator tries to distinguish the "real" images from the "fake" ones as accurately as possible. The proposed network is an end-to-end model that produces the final fused image directly from the source multi-focus images, without generating a decision map.

In this first method, the all-in-focus image serves as the real image for the discriminator, and the inputs of the generator are multi-focus images obtained by locally blurring that all-in-focus image; such synthetic inputs cannot perfectly simulate natural images. Therefore, building on the above method, we further propose a multi-focus image fusion method based on a dual-discriminator generative adversarial network. In this version, the multi-focus source images are not only the input of the generator but also serve, respectively, as the real images of the two discriminators. The network can therefore be trained directly on a multi-focus image data set, with no need to generate training data by Gaussian blurring. In addition, this thesis proposes an adaptive loss function based on structural similarity to ensure the stability of network training: according to the structural similarity loss between the source images and the real images, the adversarial losses between the generator and the two discriminators are adjusted dynamically, which improves the fusion quality of the model and guarantees the stability of training.

Finally, the quality of the fused images is compared in terms of both subjective observation and objective evaluation. The experimental results show that our fused images are clearer and contain richer details.
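To make the adaptive, SSIM-based loss described above more concrete, the following is a minimal sketch in PyTorch of a generator loss for a dual-discriminator setup. It is not the thesis's actual implementation: the specific weighting scheme (weights derived from the structural similarity between the fused output and each source image), the use of the third-party pytorch_msssim package, and all function and variable names are illustrative assumptions.

```python
# Minimal sketch (assumed, not the author's code) of an SSIM-weighted
# adversarial loss for a generator trained against two discriminators.
import torch
import torch.nn as nn
from pytorch_msssim import ssim  # pip install pytorch-msssim

bce = nn.BCEWithLogitsLoss()

def generator_loss(fused, src_a, src_b, d_a_logits, d_b_logits):
    """Adversarial loss of the generator against the two discriminators,
    weighted adaptively by SSIM between the fused image and each source.

    fused, src_a, src_b: (N, C, H, W) tensors scaled to [0, 1].
    d_a_logits, d_b_logits: discriminator outputs for the fused image.
    """
    real_label = torch.ones_like(d_a_logits)

    # The generator wants both discriminators to classify the fused image as "real".
    adv_a = bce(d_a_logits, real_label)  # vs. the discriminator whose real image is source A
    adv_b = bce(d_b_logits, real_label)  # vs. the discriminator whose real image is source B

    # Structural similarity between the fused image and each source image.
    ssim_a = ssim(fused, src_a, data_range=1.0)
    ssim_b = ssim(fused, src_b, data_range=1.0)

    # Assumed adaptive weighting: the less similar the fused image is to a
    # source, the more its corresponding adversarial term is emphasised.
    w_a = (1.0 - ssim_a) / (2.0 - ssim_a - ssim_b + 1e-8)
    w_b = 1.0 - w_a

    # SSIM content term keeps the fused image structurally close to both sources.
    content = (1.0 - ssim_a) + (1.0 - ssim_b)

    return w_a * adv_a + w_b * adv_b + content
```

Because the weights are recomputed from the current SSIM values at every step, neither discriminator can dominate the generator's objective for long, which is one way such dynamic adjustment can stabilise adversarial training.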
Keywords/Search Tags: Multi-focus image fusion, generative adversarial network, deep learning