
Research On Algorithm For Multi-source Image Fusion

Posted on: 2019-01-15
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X Yan
Full Text: PDF
GTID: 1368330572451485
Subject: Physical Electronics
Abstract/Summary:
With the development of imaging sensors, various imaging devices have come into wide use across many fields and in daily life, bringing great convenience to industry and to people's lives. However, different kinds of imaging sensors differ greatly in their imaging mechanisms, so the images they produce are not directly suitable for some special applications. Moreover, images captured by the same device under different conditions, such as focal length or exposure time, show obvious differences. Multi-source image fusion addresses these issues by producing fused images with more complete and clearer target and scene information. Such fused images are better suited to human visual perception and machine understanding than any single input image, and can provide powerful technical support for autonomous driving and other fields. Multi-source image fusion technology has now developed to a relatively high level, yet owing to the particularity of captured scenes, current techniques still cannot solve some problems well. This dissertation focuses on these issues, chiefly infrared-visible and multi-focus image fusion. Its main research content can be summarized in the following five parts.

To lay the foundation for the research in this dissertation, the first part briefly introduces the basic principles of multi-source image fusion and surveys existing fusion methods.

Second, because images fused by traditional wavelet-transform methods have low contrast and can contain artifact noise, we propose a novel infrared and visible image fusion method based on the spectral graph wavelet transform. The method exploits the shift invariance of spectral graph wavelets and their ability to represent irregular image regions, decomposing the source images into the spectral graph wavelet domain. It then computes salient features of the source images and compares them to obtain initial fusion weight maps, which are refined with a bilateral filter to produce the final weight maps. Experimental results demonstrate that the proposed method largely avoids low contrast and noise pollution.

Third, considering the inherent drawbacks of transform-domain and spatial-domain fusion methods, we propose an infrared and visible image fusion method based on a multiscale directional nonlocal means filter. Within this framework, the decomposition and fusion of the source images are the two key issues. Our method decomposes each source image with a multiscale spatial filter. In the fusion phase, the approximation subband is fused with a local neighborhood gradient-weighted rule, and the directional detail subbands with a local high-order correlation rule. Experimental results demonstrate that the proposed method effectively avoids the patch effect in the fused image.

Fourth, with traditional fusion methods, when the input multi-focus images are not perfectly registered, the edges of the fused image are blurred and block artifacts appear. To address this, we present a multi-focus image fusion method using guided-filter-based difference images. First, a guided filter smooths each source image, with the source image itself serving as the guidance image. The source images are then filtered again by the guided filter, this time guided by the smoothed images obtained above, and a mixed focus measure applied to these results determines the initial decision map. The initial decision map is then refined, in turn, by a morphological filter and a guided filter to obtain the final fusion maps. Experimental results demonstrate that our method effectively removes block artifacts and edge blurring.

Fifth, because block artifacts appear in images fused by traditional sparse-representation methods, we present a multi-focus image fusion method based on a blur dictionary. In the dictionary-learning phase, we use a multi-focus image dataset widely adopted by the image fusion research community, smoothing its images with a rolling guidance filter to obtain training images, since such smoothed images have structure and visual appearance similar to defocused images. During fusion, we apply the learned dictionary to the source images to obtain their focus feature maps, then optimize those maps to produce the final fusion weight maps. Experimental results show that the algorithm efficiently learns the features of defocused regions in multi-focus images and fuses the boundaries between focused and defocused regions well, introducing no artifact noise and overcoming the patch effect.

Finally, existing multi-focus fusion methods based on convolutional neural networks (CNNs) require ground truth for supervised learning: they classify pixels as focused or defocused, use the classification results to build fusion weight maps, and then apply a series of post-processing steps to refine the initial weight maps into a desirable final map. In this dissertation, we instead introduce a fully end-to-end approach to fusing multi-focus image pairs that learns to predict the final fused image directly. In contrast to existing CNN-based methods, the proposed approach uses a novel CNN architecture trained to learn the fusion operation without reference ground-truth images. We train the network in a deep unsupervised setting, designing a loss function from a widely used no-reference image quality metric and a no-reference fused-image quality metric to optimize the model parameters. Experimental results show that the proposed method obtains good results without any post-processing.
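The dissertation's spectral graph wavelet pipeline is not reproduced here, but the underlying weight-map idea it describes (compute a salience measure for each source, compare them, and let the more salient source win at each pixel) can be sketched in a few lines of NumPy. This is a simplified spatial-domain illustration only; the local-contrast salience measure and the 3x3 window are illustrative assumptions, not the author's method:

```python
import numpy as np

def saliency(img, k=3):
    """Local-contrast salience: absolute deviation from a k x k box mean."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    local_mean = win.mean(axis=(-2, -1))
    return np.abs(img - local_mean)

def fuse_by_saliency(a, b):
    """Winner-take-all weight map: take image a wherever it is more salient."""
    w = (saliency(a) >= saliency(b)).astype(float)
    return w * a + (1.0 - w) * b

# toy example: a carries high-contrast detail on the left, b on the right
a = np.zeros((8, 8)); a[:, :4] = np.tile([0.0, 1.0], (8, 2))
b = np.zeros((8, 8)); b[:, 4:] = np.tile([1.0, 0.0], (8, 2))
fused = fuse_by_saliency(a, b)
```

In the actual method this comparison happens on spectral graph wavelet coefficients, and the binary map is smoothed with a bilateral filter rather than used raw.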
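The third part's framework pairs a multiscale decomposition with two distinct fusion rules, one for the approximation layer and one for the details. A minimal two-scale analogue can be sketched with a box-filter base layer, a gradient-energy-weighted average for the base, and a max-absolute rule for the detail; the multiscale directional nonlocal means filter itself and the high-order correlation rule are not reproduced, and all window sizes here are illustrative assumptions:

```python
import numpy as np

def local_energy(x, r=1):
    """Sum of squared gradients over a small neighborhood (activity measure)."""
    gy, gx = np.gradient(x)
    e = gx ** 2 + gy ** 2
    k = 2 * r + 1
    p = np.pad(e, r, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.sum(axis=(-2, -1))

def two_scale_fuse(a, b, r=3):
    """Base layers fused by gradient-weighted average, detail layers by max-abs."""
    k = 2 * r + 1
    def base(x):
        p = np.pad(x, r, mode="edge")
        win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
        return win.mean(axis=(-2, -1))
    ba, bb = base(a), base(b)
    da, db = a - ba, b - bb          # detail = source minus base layer
    ea, eb = local_energy(a), local_energy(b)
    wa = ea / (ea + eb + 1e-12)      # per-pixel base-layer weight for a
    fused_base = wa * ba + (1.0 - wa) * bb
    fused_detail = np.where(np.abs(da) >= np.abs(db), da, db)
    return fused_base + fused_detail
```

Fusing an image with itself returns the image unchanged, which is a useful sanity check for any decomposition-based fusion rule.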
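The fourth part's full pipeline (a second guided-filter pass, a mixed focus measure, and morphological refinement) is more elaborate than can be shown compactly, but its core ingredient, using the difference between a source image and its guided-filtered copy as a focus measure, can be sketched as follows. The box-filter guided filter is the standard He et al. formulation; the radius and regularization values are illustrative assumptions:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window with edge padding."""
    k = 2 * r + 1
    p = np.pad(x, r, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.mean(axis=(-2, -1))

def guided_filter(I, p, r=2, eps=1e-4):
    """Minimal guided filter: edge-preserving smoothing of p guided by I."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov = box_mean(I * p, r) - mI * mp
    var = box_mean(I * I, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

def focus_decision(src_a, src_b, r=2, eps=0.1):
    """Decision map: 1 where src_a looks more in focus (larger difference image)."""
    da = np.abs(src_a - guided_filter(src_a, src_a, r, eps))
    db = np.abs(src_b - guided_filter(src_b, src_b, r, eps))
    return (box_mean(da, r) >= box_mean(db, r)).astype(float)

# toy pair: a is sharp (checkerboard) on the left, b is sharp on the right
cb = (np.indices((10, 5)).sum(axis=0) % 2).astype(float)
a = np.full((10, 10), 0.5); a[:, :5] = cb
b = np.full((10, 10), 0.5); b[:, 5:] = cb
d = focus_decision(a, b)
fused = d * a + (1.0 - d) * b
```

A flat (defocused) region changes little under smoothing, while a textured (focused) region changes a lot, so the smoothed difference image separates the two; the actual method then cleans this decision map with morphological and guided filtering.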
Keywords/Search Tags: Image fusion, Multiband image fusion, Sparse representation, Deep learning