
Research On Wavelet-Based Pixel-Level Image Fusion Algorithms

Posted on: 2009-07-27
Degree: Doctor
Type: Dissertation
Country: China
Candidate: B Yang
Full Text: PDF
GTID: 1118360242476137
Subject: Control theory and control engineering
Abstract/Summary:
Image fusion is the process by which multiple images of the same scene are combined to produce a more accurate description of the scene than any individual source image. Fusion can be performed at different levels of information representation, listed in ascending order of abstraction: the signal, pixel, feature, and symbol levels. Pixel-level image fusion operates directly on the pixel information from the individual sensors; the result is an image that is usually better suited to human and machine perception, or to further image-processing tasks such as segmentation, feature extraction, and object recognition. Almost all image fusion algorithms developed to date fall into this category. Pixel-level image fusion has a wide range of applications in military systems, remote sensing, medical imaging, robotics, security, and surveillance. After more than twenty years of development, the field has converged on a generic scheme, multiscale-decomposition-based image fusion, represented by the pyramid and wavelet methods.

In this dissertation, wavelet-based pixel-level image fusion algorithms are studied. The research focuses on the problems of shift variance and redundancy in wavelet algorithms, with particular emphasis on their core, the wavelet multiscale decomposition. The main contributions of this work are summarized as follows:

1. A novel image fusion algorithm based on the low-redundancy discrete wavelet frame is proposed. The merits and demerits of the standard discrete wavelet transform and its undecimated version in image fusion are analyzed. Building on this analysis, the concept of separating the filtering level from the resampling level is introduced, an extended model for wavelet transforms is derived, and the frame property of the model in ℓ² space is proved. The resampling strategy of the model is optimized to reduce the redundancy of the decomposition coefficients as much as possible while retaining good shift invariance; the result is the low-redundancy discrete wavelet frame, which is then incorporated into the multiscale fusion scheme. Fusion experiments on synthetic and real-world images and image sequences demonstrate that the algorithm based on this frame overcomes the shift-variance problem of the standard wavelet algorithm and improves fusion results, while avoiding the excessive computational cost that the undecimated wavelet algorithm incurs, because of its high redundancy, when addressing the same problem.

2. A novel image fusion algorithm based on the quincunx-sampled discrete wavelet frame is proposed. First, it is proved that the resampling lattice of a multidimensional perfect-reconstruction filter bank can be replaced, and a condition for a valid replacement is given; it is then shown that the redundant perfect-reconstruction filter banks derived from this condition constitute tight frames in ℓ²(Zⁿ). On this basis, the quincunx-sampled discrete wavelet frame is obtained by replacing the rectangular resampling lattice of the standard separable two-dimensional discrete wavelet transform. The frame provides near shift invariance with very low redundancy; in addition, it introduces intermediate scales, which refine the sampling of the frequency axis. Using this frame in the multiscale fusion scheme, high-quality fusion results can be generated quickly.
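To make the generic multiscale fusion scheme concrete, the following is a minimal sketch of pixel-level fusion with a shift-invariant (stationary, fully undecimated) wavelet transform. It assumes the PyWavelets library is available and uses a plain choose-max rule on detail coefficients with averaging of the approximation band; it is an illustration of the generic scheme only, not of the dissertation's low-redundancy or quincunx-sampled frames.

```python
import numpy as np
import pywt  # PyWavelets; an assumed dependency for this illustration


def fuse_swt(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered grayscale images with a stationary (undecimated,
    shift-invariant) wavelet transform.

    Rule: average the approximation band, keep the detail coefficient with
    the larger magnitude at each position. Both inputs must share the same
    shape, and each side must be divisible by 2**level (an SWT requirement).
    """
    coeffs_a = pywt.swt2(np.asarray(img_a, dtype=float), wavelet, level=level)
    coeffs_b = pywt.swt2(np.asarray(img_b, dtype=float), wavelet, level=level)

    fused = []
    for (ca_a, (ch_a, cv_a, cd_a)), (ca_b, (ch_b, cv_b, cd_b)) in zip(coeffs_a, coeffs_b):
        approx = 0.5 * (ca_a + ca_b)  # average the low-frequency content
        details = tuple(
            np.where(np.abs(da) >= np.abs(db), da, db)  # choose-max on details
            for da, db in ((ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b))
        )
        fused.append((approx, details))

    return pywt.iswt2(fused, wavelet)
```

Because no coefficients are discarded, this decomposition is fully redundant; the frames proposed above aim to keep comparable shift invariance at a fraction of that redundancy and cost.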
3. Two nonlinear wavelet image fusion algorithms are proposed, one based on the undecimated morphological Haar wavelet transform and one based on the undecimated max-lifting scheme. Compared with linear wavelets, the morphological Haar wavelet transform and the max-lifting scheme have advantages in computational cost, pixel-information extraction, and hardware implementation; however, both transforms lack shift invariance because of their downsampling steps, which introduces severe artifacts into fused images. In this work, both transforms are extended to shift-invariant form using the undecimated method (a one-dimensional sketch of the undecimated analysis step is given below). The extended transforms are incorporated into the multiscale fusion scheme and produce encouraging results, especially for the fusion of medical images and of visible-light and infrared image sequences.

4. A visible-light and infrared dynamic image fusion system is developed, based on both existing fusion algorithms and those newly proposed in this work. The system provides real-time registration and synchronized capture (storage) of the frames produced by visible-light and infrared imaging devices; it also supports off-line fusion of the captured frames using the various algorithms, as well as assessment of the fusion results.
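For illustration only, here is a minimal one-dimensional sketch of a single analysis level of an undecimated morphological Haar transform, using max as the nonlinear approximation operator. The function name, the à-trous-style step parameter, and the edge-replication boundary handling are assumptions; the dissertation's two-dimensional, multi-level fusion versions are not reproduced here.

```python
import numpy as np


def undecimated_morph_haar_level(x, step=1):
    """One analysis level of an undecimated morphological Haar transform (1-D).

    No downsampling is performed, so the decomposition is shift invariant;
    'step' plays the role of the sample spacing used at deeper levels.
    The boundary is handled by edge replication.
    """
    x = np.asarray(x, dtype=float)
    shifted = np.concatenate([x[step:], np.full(step, x[-1])])  # x[n + step]
    approx = np.maximum(x, shifted)  # morphological (max) approximation
    detail = x - shifted             # nonlinear Haar detail: pairwise difference
    return approx, detail
```

Only comparisons and subtractions are involved, which reflects the computational and hardware advantage noted above; in a fusion setting the detail coefficients of the source images would typically be combined with a choose-max rule, as in the linear case.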
Keywords/Search Tags: Image Fusion, Pixel-Level Fusion, Wavelet, Multiscale Decomposition, Frame, Filter Bank, Shift Invariance, Nonlinear