
Research On Image Fusion Method Based On Deep Neural Network

Posted on: 2020-03-05    Degree: Master    Type: Thesis
Country: China    Candidate: M Y Xiong    Full Text: PDF
GTID: 2428330578464141    Subject: Computer Science and Technology
Abstract/Summary:
Image fusion generates, from multi-source images of the same scene that contain complementary information, a single fused image better suited to human or machine perception than any individual source image. It compensates for the limitations of a single sensor and allows the fused image to carry more, and more reliable, information. Image fusion technology has been widely applied in the military, remote sensing, security monitoring and medical imaging fields.

Traditional image fusion algorithms are usually divided into two categories: transform-domain methods and spatial-domain methods. For transform-domain methods, the choice of multi-scale decomposition tool and the design of the fusion rules are the two key factors affecting the fusion result; for spatial-domain methods, the block size has a strong influence on the fusion effect. With the rise of deep learning, deep-learning-based image processing has attracted great attention. In recent years, Deep Learning (DL) has achieved major breakthroughs in many computer vision and image processing problems, such as image classification, image segmentation and image super-resolution, and image fusion based on deep learning has become an active research topic.

This thesis first introduces the research background, research content, research status, the deep learning techniques involved, common image fusion methods and the evaluation metrics for image fusion. Building on the study of traditional fusion methods and an analysis of how convolutional neural networks have been applied to image fusion, it exploits the strength of convolutional neural networks in image feature extraction and proposes image fusion methods based on convolutional auto-encoder networks, with different network architectures designed for training and testing. The proposed frameworks are described in terms of network structure design principles, visual analysis of the encoded features, fusion rule design, experimental setup, and network training and testing, and the experimental results are verified and analysed through subjective visual evaluation and objective metrics. The main research contents of this thesis are as follows:

(1) Most existing image fusion methods based on convolutional neural networks are supervised, requiring large amounts of training data and supervised labels. Acquiring such prior information demands considerable manual effort, the labelling accuracy directly affects how well the regions to be fused can be discriminated, and manually produced labels have a relatively limited scope of application. Taking the characteristics of the images to be fused into account, this thesis proposes an end-to-end unsupervised convolutional auto-encoder network for image fusion that is suitable for multi-task parallel training. The convolutional auto-encoder network overcomes the lack of labelled image data sets required by supervised learning. On the basis that the network can reconstruct the source images well, a fusion unit is designed at the feature layer, and the fused features are passed through the decoder of the network to directly output the fusion result. Experimental results show that the fused images obtained with the proposed network look natural and clear, the objective evaluation metrics are better than those of the comparison algorithms, and fusion results of better quality are obtained (an illustrative sketch of such a network is given after this summary).
(2) To address the loss of edge information in traditional spatial-domain image fusion methods, a joint convolutional auto-encoder network oriented to feature learning from multi-source homogeneous images is designed to improve the quality of multi-focus image fusion. Considering the prior redundancy and complementarity of multi-focus images, the common features and the private features of the multi-source images are represented in different network branches. Based on this property, a fusion rule is designed that uses a location-related activity measure of the private features to express the focus discrimination of the multi-focus images, realising multi-focus image fusion in the spatial domain. Compared with the mainstream LP, PCNN, DTCWT, NSCT, CVT, SR and CNN methods, the proposed method achieves a better fusion effect, which further verifies the feature extraction ability of the joint convolutional auto-encoder network for multi-source images (an illustrative activity-measure fusion rule is sketched after this summary).

(3) To extend the joint convolutional auto-encoder network to multi-source, multi-modal image feature extraction, and to better exploit the complementary and redundant relationship between infrared and visible images so as to obtain better infrared and visible image fusion results, the joint convolutional auto-encoder network is explored further. First, the network is trained on infrared and visible images simultaneously to learn their complementary and redundant features. To improve its feature learning ability, some learned weights of VGG19 are transferred to the encoder layers and fine-tuned on the multi-modal image fusion task, and a multi-task loss function related to image fusion quality is designed for training. According to the characteristics of the redundant and complementary features of the images to be fused, fusion rules are designed to fuse the feature layers, and the fused features are decoded and reconstructed to obtain the fused image. Subjective visual effects and objective experimental metrics show that the proposed algorithm performs well on the infrared and visible image fusion problem (a sketch of the VGG19 weight-transfer step also follows).
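The thesis itself does not reproduce network code in this abstract; the following is a minimal PyTorch-style sketch of the idea behind (1): an encoder-decoder trained only with a reconstruction loss, plus a fusion unit that merges the encoded features of two source images before decoding. The layer sizes, the element-wise mean fusion rule and all names are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal sketch (assumed layer sizes and mean-fusion rule) of an
# unsupervised convolutional auto-encoder with a feature-layer fusion unit.
import torch
import torch.nn as nn

class FusionAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: maps a 1-channel source image to a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Decoder: reconstructs an image from (possibly fused) features.
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def reconstruct(self, x):
        # Training path: unsupervised reconstruction of a single source image.
        return self.decoder(self.encoder(x))

    def fuse(self, x_a, x_b):
        # Test path: encode both sources, merge features in the fusion unit
        # (here a simple element-wise mean, an illustrative choice), decode.
        f = 0.5 * (self.encoder(x_a) + self.encoder(x_b))
        return self.decoder(f)

# Reconstruction-only training step; no fusion labels are required.
model = FusionAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(4, 1, 64, 64)            # stand-in batch of source patches
loss = nn.functional.mse_loss(model.reconstruct(x), x)
loss.backward()
optimizer.step()
```

Because only reconstruction is supervised (by the source image itself), the approach sidesteps the need for manually labelled fusion data, which is the point made in (1).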
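For (2), the abstract only states that the fusion rule is built on a location-related activity measure of the private features. As an assumption-laden illustration (the channel-wise l1-norm activity, the 7x7 averaging window and the hard per-pixel decision are guesses, not the thesis's actual rule), such a rule could look like this:

```python
# Sketch of an activity-measure fusion rule for multi-focus fusion.
# Assumptions: l1-norm activity over channels, 7x7 averaging window,
# hard per-pixel decision; the thesis's actual rule may differ.
import torch
import torch.nn.functional as F

def fuse_by_activity(img_a, img_b, priv_a, priv_b, window=7):
    """img_*: (B,1,H,W) source images; priv_*: (B,C,H,W) private features."""
    # Per-location activity: channel-wise l1 norm of the private features,
    # smoothed over a local window to make the measure location-related.
    act_a = F.avg_pool2d(priv_a.abs().sum(dim=1, keepdim=True),
                         window, stride=1, padding=window // 2)
    act_b = F.avg_pool2d(priv_b.abs().sum(dim=1, keepdim=True),
                         window, stride=1, padding=window // 2)
    # Focus decision mask: at each location keep the source whose private
    # features are more active, then fuse directly in the spatial domain.
    mask = (act_a >= act_b).float()
    if mask.shape[-2:] != img_a.shape[-2:]:
        mask = F.interpolate(mask, size=img_a.shape[-2:], mode='nearest')
    return mask * img_a + (1.0 - mask) * img_b
```

The intuition is that in-focus regions produce stronger private-feature responses, so the activity map doubles as a focus map and the fused image is assembled pixel by pixel from the sharper source.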
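For (3), the abstract mentions transferring some VGG19 weights into the encoder and fine-tuning them on the multi-modal fusion task. A hedged sketch of that initialisation follows; copying only the first two convolutions, using torchvision's pretrained VGG19 (recent torchvision assumed) and keeping the copied layers trainable are all assumptions made for illustration.

```python
# Sketch: initialise the first encoder convolutions from pretrained VGG19
# and fine-tune them on the fusion task (illustrative choice of layers).
import torch.nn as nn
from torchvision.models import vgg19

encoder = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)
vgg = vgg19(weights="IMAGENET1K_V1").features
# VGG19 features[0] and features[2] are the first two 3x3, 64-channel convs;
# copy their weights, then leave them trainable so they are fine-tuned.
encoder[0].load_state_dict(vgg[0].state_dict())
encoder[2].load_state_dict(vgg[2].state_dict())
for p in encoder.parameters():
    p.requires_grad = True  # fine-tune rather than freeze
```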
Keywords/Search Tags: image fusion, auto-encoder network, convolutional neural network, joint convolutional auto-encoder network, transfer learning