
Research On Pixel-level Multisensor Image Fusion

Posted on: 2008-04-21
Degree: Master
Type: Thesis
Country: China
Candidate: K Wang
Full Text: PDF
GTID: 2178360272967077
Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
With the rapid development of multi-sensor technologies, the amount of available image data has increased greatly. Images from different sensors carry different information, and image fusion combines the source images into a single new image whose informational content is more suitable for human perception. Fusion can take place at different levels of information representation (pixel level, feature level, and decision level); this thesis focuses on pixel-level fusion algorithms.

Simple approaches such as pixel averaging are easy to implement, but they produce blurred images in which details are often lost. For this reason, many researchers have developed more effective fusion approaches. Methods based on multi-scale transforms fuse images at different scales and in different frequency bands, and can therefore produce better fused images.

Research on multi-scale fusion methods concerns two aspects: the multi-scale transform itself and the fusion rules. Among transform tools, the Laplacian pyramid (LP) and the wavelet transform (WT) are widely used. The commonly used separable wavelet transforms, however, are limited in their ability to capture the geometry of image edges. Researchers recently introduced a new representation, the contourlet transform (CT), a "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key to visual information. Contourlets were soon applied to image fusion, with results better than those of the WT. The contourlet transform is not shift-invariant, however, and aliasing introduces artifacts into the fused image. The nonsubsampled contourlet transform (NSCT) is shift-invariant while still capturing this intrinsic geometrical structure, which makes it better suited to image fusion.

This thesis studies several transform tools, including the LP, the WT, and the NSCT. Based on the NSCT, two novel fusion approaches are proposed: a fusion method using definition (sharpness) and a fusion method using enhancement of local contrast. Evaluation of the fused image is also vital to a fusion system, so qualitative and quantitative evaluation criteria are introduced in detail. Experimental results indicate that the definition-based method preserves distinct edge information, while the local-contrast enhancement method makes object regions more prominent.
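As an illustration of the contrast drawn above between pixel averaging and multi-scale fusion, the following Python sketch fuses two source images with both strategies. It uses PyWavelets as a stand-in multi-scale transform (the NSCT is not available in common Python libraries) and a generic choose-max coefficient rule, not the thesis's definition-based or local-contrast-based rules; the wavelet name, decomposition level, and test images are illustrative assumptions.

# Minimal sketch: pixel-average fusion vs. a generic multi-scale fusion.
# The wavelet transform here stands in for the NSCT used in the thesis.
import numpy as np
import pywt


def average_fusion(img_a, img_b):
    """Pixel-average fusion: simple, but tends to blur fine detail."""
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0


def wavelet_fusion(img_a, img_b, wavelet="db2", level=3):
    """Multi-scale fusion: decompose, merge coefficients, reconstruct."""
    coeffs_a = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)

    # Average the approximation band to keep overall brightness stable.
    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):
        # For each detail sub-band, keep the coefficient with the larger
        # magnitude, which tends to retain edges from the sharper source.
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)


if __name__ == "__main__":
    # Two synthetic arrays standing in for registered multi-sensor images.
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = rng.random((128, 128))
    print(average_fusion(a, b).shape, wavelet_fusion(a, b).shape)

Averaging the low-frequency band preserves overall intensity, while taking the larger-magnitude detail coefficients keeps the sharper structure from either source image, which is the basic idea behind most multi-scale fusion rules.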
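Likewise, as a rough illustration of quantitative fusion-evaluation criteria, the sketch below computes three measures commonly used in the fusion literature: grey-level entropy, standard deviation, and average gradient (often taken as a definition, i.e. sharpness, measure). The exact criteria and formulations used in the thesis are not reproduced here.

# Minimal sketch of generic quantitative fusion-evaluation measures.
import numpy as np


def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram (assumes 8-bit range)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))


def average_gradient(img):
    """Average gradient, often used as a 'definition' (sharpness) measure."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))


def standard_deviation(img):
    """Standard deviation as a global contrast measure of the fused image."""
    return float(np.std(img.astype(np.float64)))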
Keywords/Search Tags: Pixel-level image fusion, Multi-scale transform, Image pyramid, Wavelet transform, Redundant wavelet transform, Nonsubsampled contourlet transform (NSCT), Fusion evaluation