
Multisensor Image Fusion Based On Multiscale Complex Transform

Posted on: 2017-05-05
Degree: Master
Type: Thesis
Country: China
Candidate: Z K Ma
Full Text: PDF
GTID: 2348330488474088
Subject: Control theory and control engineering
Abstract/Summary:
Image fusion aims to combine information from different images of the same scene into a single image that is more suitable for human perception or for subsequent image processing tasks. It has been applied in many fields, including defense surveillance, remote sensing, medical imaging, and computer vision. Compared with those of multiscale real transforms, e.g., the discrete wavelet transform, the coefficients of multiscale complex transforms carry phase information in addition to magnitude. The phase of the complex transform coefficients contains more useful information than the magnitude, e.g., spatial structure features. However, most existing multiscale complex transform based image fusion methods employ only the magnitude information and ignore the phase. This dissertation studies multiscale complex transform based image fusion algorithms. The main research work is as follows.

First, the dissertation reviews the basic steps of traditional multiscale transform based image fusion methods, including multiscale decomposition and reconstruction, definitions of similarity and saliency measures, and fusion rules, and discusses the advantages and drawbacks of these methods.

Then, we study the phase information of multiscale complex transform coefficients, which is usually discarded by existing multiscale complex transform based image fusion methods. By combining magnitude and phase information, a novel multimodality image fusion algorithm is proposed. The algorithm employs the shiftable complex directional pyramid transform (SCDPT) as the multiscale transform tool, with which the source images are decomposed and reconstructed. Because the bandpass subband coefficients are complex-valued, we construct a novel similarity measure (CCC-EM) by combining the circular correlation coefficient (CCC) of the relative phase with the energy matching index (EM).
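A measure of this kind can be sketched as follows. The thesis does not give its exact CCC-EM formulas here, so this sketch uses the standard circular-statistics correlation coefficient, a common energy-matching form, and an assumed weighting `alpha`; all three are illustrative stand-ins, not the dissertation's definitions.

```python
import numpy as np

def circular_correlation(phase_a, phase_b):
    # Circular correlation between two phase maps (standard
    # circular-statistics form; the thesis's CCC of "relative
    # phase" may be computed over local windows instead).
    a = phase_a - np.angle(np.mean(np.exp(1j * phase_a)))
    b = phase_b - np.angle(np.mean(np.exp(1j * phase_b)))
    num = np.sum(np.sin(a) * np.sin(b))
    den = np.sqrt(np.sum(np.sin(a) ** 2) * np.sum(np.sin(b) ** 2))
    return num / den if den > 0 else 0.0

def energy_matching(c_a, c_b):
    # Energy matching index between two complex subband patches:
    # close to 1 when the patches carry similar energy.
    e_a = np.sum(np.abs(c_a) ** 2)
    e_b = np.sum(np.abs(c_b) ** 2)
    return 2.0 * np.sqrt(e_a * e_b) / (e_a + e_b) if (e_a + e_b) > 0 else 0.0

def ccc_em(c_a, c_b, alpha=0.5):
    # Combine phase similarity (CCC) and energy similarity (EM);
    # the convex weighting `alpha` is an assumption.
    ccc = circular_correlation(np.angle(c_a), np.angle(c_b))
    em = energy_matching(c_a, c_b)
    return alpha * ccc + (1.0 - alpha) * em
```

For identical patches both components equal 1, so the combined index is 1; as the phases decorrelate or the energies diverge, the index drops, which is what the region classification below relies on.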
The magnitude of a complex coefficient reflects the strength of gray-level changes, while the phase reflects their direction. Using the CCC-EM index, the bandpass complex directional subbands are divided into three kinds of regions, and different saliency measures and fusion schemes are designed for each kind. Because the lowpass subband coefficients are real-valued, we instead employ the traditional structural similarity (SSIM) index to divide the lowpass subbands into different types of regions, and proper fusion schemes are then applied to each type when the lowpass coefficients are merged. Experimental results show that the proposed algorithm better handles the redundant and complementary information between multimodality images and thus produces a fused image with higher contrast.

Finally, traditional multiscale transform tools have only a limited number of directional subbands and lack steerability, so they cannot extract accurate directional information when applied to image fusion. The monogenic wavelet transform provides not only magnitude and instantaneous phase information but also directional information, which can be used to estimate the local preferential direction; the transform coefficients can then be rotated to that direction. We therefore propose a novel multifocus image fusion method based on the monogenic wavelet transform, which consists of a radial wavelet transform and a directional wavelet transform. Because the radial wavelet coefficients contain all of the directional information, a focus measure based on radial magnitude information is designed; by exploiting the steerability of the directional wavelet, we design a second focus measure based on directional information. Combining the two focus measures yields a novel multifocus image fusion scheme.
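The magnitude, instantaneous phase, and local orientation that the monogenic representation supplies can be sketched with a single-band, frequency-domain Riesz transform. This is a minimal stand-in: the thesis's monogenic wavelet transform is multiscale (radial plus directional wavelets), whereas this sketch extracts the same three features from one band.

```python
import numpy as np

def riesz_components(img):
    # First-order Riesz transform computed in the frequency domain:
    # multipliers -i*wx/|w| and -i*wy/|w| applied to the spectrum.
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)
    r[0, 0] = 1.0  # avoid division by zero at the DC bin
    F = np.fft.fft2(img)
    rx = np.real(np.fft.ifft2(-1j * fx / r * F))
    ry = np.real(np.fft.ifft2(-1j * fy / r * F))
    return rx, ry

def monogenic_features(img):
    # Magnitude, instantaneous phase, and local orientation of the
    # monogenic signal; steering the directional response to the
    # estimated orientation gives sqrt(rx**2 + ry**2).
    rx, ry = riesz_components(img)
    magnitude = np.sqrt(img ** 2 + rx ** 2 + ry ** 2)
    phase = np.arctan2(np.sqrt(rx ** 2 + ry ** 2), img)
    orientation = np.arctan2(ry, rx)
    return magnitude, phase, orientation
```

The orientation map is what makes the representation steerable: rotating the pair (rx, ry) to the local preferential direction concentrates the directional energy into one component, which is the property the directional focus measure exploits.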
The experimental results show that the proposed image fusion method can better extract the focused regions from the source images than the traditional multiscale transform based image fusion methods do.
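A choose-max multifocus rule of the kind described above can be sketched as follows. The window-energy focus measure and the window size `k` are illustrative assumptions standing in for the thesis's radial-magnitude and directional-information measures.

```python
import numpy as np

def local_energy(mag, k=7):
    # Box-filtered local energy of a magnitude map, via 2-D
    # cumulative sums over an edge-padded square window.
    pad = k // 2
    m = np.pad(mag ** 2, pad, mode="edge")
    c = np.cumsum(np.cumsum(m, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column for window sums
    h, w = mag.shape
    return (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w])

def fuse_multifocus(coef_a, coef_b, mag_a, mag_b, k=7):
    # Per pixel, keep the coefficient whose neighborhood has the
    # larger focus measure, i.e. is judged "more in focus".
    f_a = local_energy(mag_a, k)
    f_b = local_energy(mag_b, k)
    return np.where(f_a >= f_b, coef_a, coef_b)
```

The choose-max decision is the classic multifocus rule; the dissertation's contribution lies in the focus measures themselves, which this sketch replaces with plain local energy.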
Keywords/Search Tags: image fusion, shiftable complex directional pyramid transform, monogenic wavelet transform, joint magnitude and phase information