
Research On Multi-Sensor Image Fusion Algorithm Based On Multiscale Decomposition

Posted on: 2010-06-04
Degree: Doctor
Type: Dissertation
Country: China
Candidate: C Q Ye
Full Text: PDF
GTID: 1118360275997730
Subject: Computer application technology

Abstract/Summary:
Image fusion is an important part of multi-sensor information fusion, and a useful technique for image understanding and computer vision. It is the process by which multiple images of the same scene are combined to generate a more complete and accurate description of the scene than any individual source image provides. The fused image supplies useful information for further computer processing, such as image segmentation, object recognition, object detection, and battle damage assessment. Image fusion has been widely applied in remote sensing, military applications, robotics, medical imaging, and other fields.

This dissertation focuses on multi-sensor image fusion algorithms based on multiscale decomposition. To address the shortcoming of existing fusion algorithms that ignore the intrinsic characteristics of the source images, prior information such as the imaging mechanism of the image sensors and the imaging characteristics of the source images is analyzed in depth. Several image fusion algorithms adapted to the characteristics of the source images are proposed, built on multiscale geometric analysis tools such as the redundant wavelet transform and the nonsubsampled contourlet transform.

The main contributions of this dissertation are summarized as follows:

1. To overcome the ringing effect caused by the shift variance of the orthogonal discrete wavelet transform, a novel gray-scale multifocus image fusion algorithm based on the redundant wavelet transform is proposed. By its imaging principle, a defocused optical imaging system acts as a lowpass filter; therefore, whether a pixel or region of a multifocus image is in focus can be determined from its corresponding high-frequency information.
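The focus criterion just described can be sketched as follows. This is an illustrative stand-in, not the dissertation's method: it uses a discrete Laplacian as the high-frequency response instead of redundant wavelet subbands, and the window radius is an assumed parameter.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def highfreq_energy(img, radius=2):
    """Per-pixel high-frequency energy: squared Laplacian response summed
    over a (2*radius+1)^2 window. A simple stand-in for the high-frequency
    subband energy of a redundant wavelet transform."""
    p = np.pad(img.astype(float), 1, mode="edge")
    # 4-neighbour discrete Laplacian
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    e = np.pad(lap ** 2, radius, mode="edge")
    k = 2 * radius + 1
    # sum the squared response over each local window
    return sliding_window_view(e, (k, k)).sum(axis=(2, 3))

def fuse_multifocus(a, b, radius=2):
    """Pixel-wise selection: keep the pixel from whichever source image is
    locally sharper, i.e. has higher high-frequency energy."""
    mask = highfreq_energy(a, radius) >= highfreq_energy(b, radius)
    return np.where(mask, a, b)
```

Because a defocused region has been lowpass filtered, its Laplacian energy is small, so the selection mask naturally favors the in-focus source at each pixel.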
Building on this theoretical basis, the region vector norm and the local contrast are introduced in the redundant wavelet transform domain, and selection principles based on them are presented for the low-frequency and high-frequency subband coefficients respectively. The algorithm preserves the useful information of the source images, avoids the ringing effect that appears in fused images reconstructed with the orthogonal discrete wavelet transform, and yields a fused image that is in focus throughout.

2. Exploiting the multiscale, multi-directional and shift-invariant properties of the nonsubsampled contourlet transform, an image fusion framework based on this transform is proposed. In light of the distinct imaging characteristics of infrared and visible images, two fusion algorithms for them are developed within this framework. The first is window-based: a selection principle based on local energy and local variance is presented for the low-frequency subband coefficients, and a selection scheme based on the local directional contrast for the high-frequency subband coefficients. It combines the hot-object information of the infrared image with the rich spectral information of the visible image. The second is based on region segmentation, introducing the idea of region-wise fusion. Two measures, the ratio of region energy and the ratio of region sharpness, are presented to characterize regional salience and to guide the selection of fusion coefficients in the nonsubsampled contourlet transform domain. Because related pixels participate in the fusion process as a whole, this algorithm achieves better fusion performance than both pixel-based and window-based algorithms.

3. After analyzing the problem of spectral distortion in fused remote sensing images, a novel fusion algorithm for multispectral and panchromatic images is proposed, based on the region correlation coefficient in the nonsubsampled contourlet transform domain. Following the idea of region-wise fusion, a measure named the region correlation coefficient is presented. The source images are first split into regions with different spatial characteristics, and different fusion rules are then applied according to the degree of correlation between the multispectral and panchromatic images in each region. The algorithm strikes a good balance between spectral and spatial information: the fused multispectral image reduces spectral distortion and improves spatial detail at the same time, while preserving the salient features of the original multispectral image.

4. For the fusion of SAR and panchromatic images, a novel algorithm based on the imaging characteristics of the SAR image is presented. Two measures, the region information entropy and the ratio of region mean, are defined in the nonsubsampled contourlet transform domain so that the SAR image can be split into rough regions, smooth regions and highlight point-target regions. The algorithm applies a different fusion rule to each region independently. The fused image incorporates target information from the SAR image that is difficult to identify in the panchromatic image, while preserving the spatial resolution of the panchromatic image.
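As an illustration of the region correlation coefficient idea in contribution 3, the sketch below computes a per-region Pearson correlation between a multispectral intensity image and a panchromatic image, then applies a region-adaptive rule. It is a simplified pixel-domain sketch, not the dissertation's algorithm: the label map is assumed to come from a prior segmentation, and the threshold `tau` and the additive detail-injection rule are assumptions.

```python
import numpy as np

def region_corrcoef(ms, pan, labels):
    """Pearson correlation between the multispectral intensity and the
    panchromatic image, computed independently within each labelled region."""
    out = {}
    for r in np.unique(labels):
        m = labels == r
        x, y = ms[m].astype(float), pan[m].astype(float)
        sx, sy = x.std(), y.std()
        # a flat region carries no correlation information
        out[r] = 0.0 if sx == 0 or sy == 0 else float(
            ((x - x.mean()) * (y - y.mean())).mean() / (sx * sy))
    return out

def fuse_ms_pan(ms, pan, labels, tau=0.7):
    """Region-adaptive rule (illustrative): where a region correlates well
    with the panchromatic image, inject the pan spatial detail; otherwise
    keep the multispectral values to limit spectral distortion."""
    corr = region_corrcoef(ms, pan, labels)
    fused = ms.astype(float).copy()
    for r, c in corr.items():
        m = labels == r
        if c >= tau:
            fused[m] = ms[m] + (pan[m] - pan[m].mean())
    return fused
```

The point of the region-wise rule is exactly the trade-off the abstract describes: highly correlated regions can safely absorb panchromatic detail, while weakly correlated regions keep their original spectral values.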
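The two SAR region measures named in contribution 4 can be sketched as below. For simplicity this computes them in the pixel domain rather than the nonsubsampled contourlet domain, and the histogram bin count and classification thresholds are assumptions, not values from the dissertation.

```python
import numpy as np

def region_stats(sar, labels, n_bins=32):
    """Per-region information entropy (of the gray-level histogram) and
    ratio of region mean (region mean over global mean)."""
    g = sar.astype(float).mean()
    stats = {}
    for r in np.unique(labels):
        v = sar[labels == r].astype(float)
        hist, _ = np.histogram(v, bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]  # ignore empty bins; 0*log(0) is taken as 0
        entropy = float(-(p * np.log2(p)).sum())
        stats[r] = (entropy, float(v.mean() / g))
    return stats

def classify_region(entropy, mean_ratio, h_rough=3.0, r_hi=2.0):
    """Illustrative three-way split with assumed thresholds: very bright
    regions -> highlight point targets; high-entropy -> rough; else smooth."""
    if mean_ratio >= r_hi:
        return "highlight"
    return "rough" if entropy >= h_rough else "smooth"
```

Speckled (rough) SAR regions spread their gray levels across many histogram bins and thus score high entropy, while strong point targets stand out through a region mean well above the global mean; each class can then receive its own fusion rule.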
Keywords: image fusion, redundant wavelet transform, nonsubsampled contourlet transform, local directional contrast, region correlation coefficient