
Research On Methods For Pixel-level Multi-source Image Fusion

Posted on: 2017-05-16
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y Liu
Full Text: PDF
GTID: 1108330485451540
Subject: Control Science and Engineering
Abstract/Summary:
In recent years, a variety of imaging devices have entered people's daily life, making it increasingly convenient to obtain different categories of images. However, owing to the limitations of its imaging mechanism, a specific imaging device is usually unable to meet the demands of every application. In addition, images obtained by the same category of device may vary greatly under different imaging conditions (e.g., imaging parameters such as focal length and exposure time), and an image acquired under a fixed imaging condition cannot capture the complete information of a scene. As a result, merging the information provided by multiple images obtained under different imaging mechanisms or conditions to accomplish a specific task has become an urgent research subject. By designing image processing algorithms, pixel-level multi-source image fusion aims at generating a composite image (known as the fused image) by integrating the complementary information from multiple input images (known as source images) of the same scene. The fused image should describe the scene more completely and accurately than any individual source image. Multi-source image fusion has exhibited high application value in various fields such as video surveillance, medical diagnosis, remote sensing, and digital photography.

Against this background, this dissertation concentrates on several types of image fusion problems, including multi-focus image fusion, multi-exposure image fusion, visible-infrared image fusion, and multi-modal medical image fusion. Several new transform-domain and spatial-domain image fusion methods are proposed, aiming to promote the progress of image fusion research. The primary contents and novelties of this dissertation are as follows:

1.
On the study of multi-scale transform based image fusion methods, considering that the traditional wavelet transform based fusion method is not shift-invariant, we propose a multi-focus image fusion algorithm based on the wavelet transform and adaptive blocks. The proposed algorithm is implemented within the framework of the discrete wavelet transform. For the low-frequency coefficients, an adaptive block-based fusion technique is adopted, where the optimal block size is calculated using a differential evolution algorithm. Moreover, a pixel-level label map, which accurately indicates the focus property of each coefficient, is obtained by refining the initial low-frequency fusion result. The high-frequency fusion task is accomplished by combining a local wavelet energy based rule with the information provided by the label map. Finally, the fused image is obtained by performing the inverse discrete wavelet transform. Experimental results demonstrate that the proposed method can overcome the defect of shift-variance to some extent, obtaining better fusion results in regions that are not accurately registered. In addition, the proposed method effectively prevents the block artifacts usually introduced by spatial-domain block-based fusion methods.

2. On the study of sparse representation based image fusion methods, considering the contradiction between the representation ability of the dictionary and its insensitivity to noise, an image fusion method based on adaptive sparse representation is proposed. During the training process, instead of learning a single highly redundant dictionary as in traditional sparse representation based fusion methods, a set of more compact sub-dictionaries is learned from numerous high-quality image patches that have been pre-classified into several categories according to their gradient information.
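The gradient-based pre-classification of training patches described above can be sketched as follows. This is a hypothetical, simplified illustration: the category names, the structure-tensor orientation test, and the thresholds are assumptions, and the subsequent per-category sub-dictionary learning (e.g., by K-SVD) is omitted.

```python
import numpy as np

def classify_patch(patch, smooth_thresh=0.01):
    """Assign an image patch to a gradient-based category so that a
    compact sub-dictionary can later be learned per category.
    Categories are named by the dominant gradient direction."""
    gy, gx = np.gradient(patch.astype(float))
    energy = np.mean(gx ** 2 + gy ** 2)
    if energy < smooth_thresh:
        return "smooth"  # nearly flat patch
    # Dominant gradient orientation from the 2x2 structure tensor.
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    deg = np.degrees(theta) % 180
    if deg < 22.5 or deg >= 157.5:
        return "horizontal"
    elif deg < 67.5:
        return "diagonal-45"
    elif deg < 112.5:
        return "vertical"
    return "diagonal-135"

# Group training patches by category; one sub-dictionary would then be
# learned from each group (dictionary learning itself omitted here).
patches = [np.ones((8, 8)), np.tile(np.arange(8.0), (8, 1))]
groups = {}
for p in patches:
    groups.setdefault(classify_patch(p), []).append(p)
```

A flat patch falls into the "smooth" category, while a horizontal intensity ramp is classified by its gradient direction; real training would feed many such patches per category into the dictionary learner.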
During the fusion process, one of the sub-dictionaries is adaptively selected for each given set of source image patches. Experimental results demonstrate that the proposed method resolves the above contradiction and outperforms the conventional sparse representation based method when the source images are corrupted by noise.

3. On the study of transform domain image fusion methods, considering the inherent defects of multi-scale transform based and sparse representation based fusion methods, we propose a general image fusion framework that combines the multi-scale transform with sparse representation. In this framework, the low-pass bands are merged with a sparse representation based fusion approach, while the high-pass bands are fused using the absolute values of their coefficients. Compared with traditional multi-scale transform based fusion methods, in which the low-pass bands are merged with the averaging rule, the proposed method effectively prevents the loss of energy and thus preserves image contrast. Furthermore, it largely overcomes the difficulty of selecting the decomposition level of the multi-scale transform. Compared with conventional sparse representation based fusion methods, the proposed method separates the high-frequency and low-frequency components of the original images; as a result, it prevents spatial inconsistency as well as the loss of spatial details in the fused image. Experimental results demonstrate that the proposed method overcomes the defects of traditional transform domain fusion methods and obtains improved fusion results. In addition, for different categories of image fusion tasks, we study the optimal multi-scale transform and its decomposition level.

4.
On the study of spatial domain multi-focus image fusion methods, considering that conventional methods often do not work well in regions that are not accurately registered, we propose a new multi-focus image fusion method based on the dense scale-invariant feature transform (SIFT). The dense SIFT descriptor is first employed as a focus measure to obtain a reliable initial decision map for fusion. Then, exploiting its capacity to measure local similarity, the SIFT descriptor is used to match mis-registered pixels and refine the fusion result. Experimental results demonstrate that the proposed method is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual perception and objective evaluation metrics.

5. On the study of spatial domain multi-exposure image fusion methods, considering that traditional methods usually fail to remove ghosting artifacts when the scene is dynamic with moving objects, we propose a new multi-exposure image fusion method based on dense SIFT. As local feature descriptors can measure both local contrast and local similarity, dense SIFT is used simultaneously for local contrast extraction and ghosting artifact removal in the proposed algorithm. Experimental results demonstrate that the proposed method outperforms conventional exposure fusion methods in terms of both visual quality and objective evaluation.
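The hybrid transform-domain fusion rule of contribution 3 can be illustrated with a minimal numpy-only sketch. This is not the dissertation's actual implementation: a crude box-blur pyramid stands in for the multi-scale transform, and the sparse-representation low-pass rule is replaced by a simple global l1 activity comparison for brevity.

```python
import numpy as np

def fuse_mst(a, b, levels=2):
    """Simplified fusion in the spirit of the MST-SR framework:
    high-pass bands take the larger-magnitude coefficient, and the
    low-pass band is chosen by an l1 activity measure (a stand-in for
    the patch-wise sparse-coding rule over a learned dictionary)."""
    def decompose(img):
        low, highs = img.astype(float), []
        for _ in range(levels):
            # Crude 2x2 box blur via averaged circular shifts.
            blur = 0.25 * (low + np.roll(low, 1, 0) + np.roll(low, 1, 1)
                           + np.roll(np.roll(low, 1, 0), 1, 1))
            highs.append(low - blur)   # high-pass residual at this level
            low = blur
        return low, highs

    la, ha = decompose(a)
    lb, hb = decompose(b)
    # High-pass rule: keep the coefficient with the larger absolute value.
    fused_h = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(ha, hb)]
    # Low-pass rule: pick the band with larger l1 activity (global here;
    # the dissertation does this patch-wise with sparse coding).
    fused_l = la if np.abs(la).sum() >= np.abs(lb).sum() else lb
    out = fused_l
    for h in reversed(fused_h):        # reconstruct by summing the bands
        out = out + h
    return out
```

Because the low-pass band is selected rather than averaged, a flat source fused with a detailed one returns the detailed image intact, which mirrors the energy-preservation argument above.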
Keywords/Search Tags: Image fusion, Transform domain, Spatial domain, Multi-scale transform, Wavelet transform, Sparse representation, Dictionary learning, Dense SIFT, Activity level measurement, Local similarity measurement, Ghosting artifacts