
Research On Multi-Sensor Image Fusion Method At Pixel Level

Posted on: 2014-05-02
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y F Li
Full Text: PDF
GTID: 1268330428975762
Subject: Electrical system control and information technology
Abstract/Summary:PDF Full Text Request
With the development of sensor technology and image processing technology in recent years, the practical applicability of image fusion has steadily improved, and image fusion is now used extensively in areas ranging from defense to civilian applications. In application systems such as remote sensing, situational awareness, intelligence gathering, all-weather surveillance, medical diagnostics, military systems, and robotics, the widespread use of multi-sensor and multi-spectral imagery has further increased the importance of image fusion, and the technique shows ever broader application prospects.

This dissertation focuses on multi-sensor image fusion theory and algorithms, in particular the fusion of infrared and visible images, which is widely used in situational awareness, surveillance, and target detection and tracking. Drawing comprehensively on progress in image analysis and image understanding, it investigates effective processing and analysis methods for multi-sensor image fusion at the pixel level. The main goals are to fuse multi-sensor images in a way that enhances target features during the fusion process itself, to obtain fused images with good visual quality, and to meet the needs of real-time fusion systems. The main research work and contributions are as follows.

1. Multi-scale transforms commonly used in image fusion are reviewed and analyzed comprehensively, and their advantages and disadvantages are examined from the perspective of sparse signal representation. The shift dependency of various multi-scale transforms and its effect on fusion performance are then analyzed quantitatively and qualitatively.
Experiments combine eight popular multi-scale transforms, including pyramid, wavelet, and multi-scale geometric analysis methods, with two popular fusion rules. By analyzing and comparing the experimental results, the dissertation offers practical guidance for choosing multi-scale fusion schemes.

2. Most fusion algorithms based on multi-scale transforms devote elaborate fusion rules to the detail coefficients but combine the approximation coefficients with simple rules such as the mean or a weighted average. Because the approximation coefficients represent the spatial energy distribution of the source images, such simple rules reduce the brightness and contrast of the fused image, allowing the source image with higher intensity to suppress or annihilate the target characteristics and texture detail of the others, which in turn degrades the visual quality and target detectability of the fused result. To solve this problem, the dissertation presents an approximation-coefficient fusion rule based on brightness remapping, using the curvelet transform as the multi-scale decomposition. Because the rule takes the intensity and contrast characteristics of the source images into account, experiments show that it effectively strengthens the target characteristics and texture detail of the weaker source image and significantly improves the dynamic range and target feature intensity of the fused image.

3. Limited by sensor physics or degraded by natural conditions, source imagery often exhibits low contrast, a narrow intensity range, or a blurry appearance, which reduces the quality of the fused image. To enhance fused images efficiently during the fusion process, the dissertation proposes a novel image fusion algorithm using the multi-scale top-hat transform.
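As an illustration of the multi-scale top-hat idea, the following is a minimal Python sketch using SciPy's grayscale morphology. The function name, structuring-element sizes, the max-based feature combination, and the weighting scheme are illustrative assumptions, not the dissertation's exact formulation:

```python
import numpy as np
from scipy import ndimage

def multiscale_tophat_fuse(a, b, sizes=(3, 7, 11), w_bright=1.0, w_dim=1.0):
    """Fuse two grayscale images by combining multi-scale bright/dim
    top-hat features (a hedged sketch, not the dissertation's exact rule)."""
    a, b = a.astype(float), b.astype(float)
    bright = np.zeros_like(a)
    dim = np.zeros_like(a)
    for img in (a, b):
        for s in sizes:  # same-shape structuring elements, increasing size
            # white top-hat: bright details smaller than the structuring element
            bright = np.maximum(bright, img - ndimage.grey_opening(img, size=s))
            # black top-hat: dim details smaller than the structuring element
            dim = np.maximum(dim, ndimage.grey_closing(img, size=s) - img)
    base = (a + b) / 2.0                              # coarse fused background
    fused = base + w_bright * bright - w_dim * dim    # enhance salient features
    return np.clip(fused, 0, 255)
```

Raising `w_bright` or `w_dim` would emphasize bright or dark salient structures respectively, mirroring the abstract's claim that the enhancement can be steered by application requirements.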
Multi-scale bright and dim salient features of the source images are extracted iteratively through top-hat transforms using structuring elements of the same shape and increasing sizes. These multi-scale bright and dim features are combined by a fusion rule, and the enhanced fused image is obtained by weighting the bright and dim features according to specific requirements. Experiments on infrared and visible image pairs and on other multi-sensor images from different applications, compared against several fusion algorithms, verify that the proposed algorithm can efficiently and simultaneously fuse and enhance the salient features of the source images, producing better visual quality and better target detection and identification capability. In addition, according to different application requirements, the proposed algorithm can produce differently enhanced fusion results.

4. To meet the requirements of real-time fusion systems, the dissertation proposes a novel fast mutual modulation fusion (FMMF) algorithm for multi-sensor images. First, the two source images are scaled by factors derived from the ratio of the corresponding pixel energies; then an offset term computed from statistical parameters of the source images is added; finally, the intermediate results are multiplied and normalized to obtain the fused image. The fusion process consists only of additions and multiplications, forming a nonlinear combination. Experimental results show that FMMF is simple and fast, and that its performance and efficiency are superior to pyramid- and wavelet-based methods.

5. The dissertation reviews the past 15 years of research on night-vision multi-sensor image coloration (rendering night-vision imagery in color) and distills a general coloration model. On this basis, a new coloration method using fast mutual modulation fusion (FMMF) and color transfer is designed for low-light and infrared image pairs. The coloration process operates in the YCbCr color space.
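The mutual-modulation combination described in point 4 can be read as the following rough NumPy sketch. Since the abstract gives only an outline, the specific energy ratio, offset term, and normalization used here are plausible assumptions rather than the dissertation's exact formulas:

```python
import numpy as np

def fmmf(a, b, eps=1e-6):
    """Fast mutual modulation fusion -- a hedged sketch of one plausible
    reading of the abstract, not the dissertation's exact formulation."""
    a, b = a.astype(float), b.astype(float)
    energy_a, energy_b = a ** 2, b ** 2                  # per-pixel energy
    gain_a = energy_a / (energy_a + energy_b + eps)      # mutual modulation ratios
    gain_b = energy_b / (energy_a + energy_b + eps)
    offset = 0.5 * (a.mean() + b.mean())                 # statistical offset term
    # multiply the modulated-plus-offset terms: a nonlinear combination
    product = (gain_a * a + offset) * (gain_b * b + offset)
    # normalize the product back to the 8-bit display range
    product = (product - product.min()) / (np.ptp(product) + eps)
    return 255.0 * product
```

Only elementwise additions and multiplications (plus one global normalization) are involved, which is consistent with the abstract's claim of low computational cost.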
First, the fused image produced by fast mutual modulation fusion, which merges the information of both source images, is assigned to the Y channel; the Cb and Cr channels are then formed using Toet's method, which extracts the common component from the source images; finally, the false-color image is obtained by applying color transfer to the resulting pseudo-color YCbCr image. Experiments show that the results of this method carry more salient information, higher color contrast, and a more natural color appearance than other methods. Because fast mutual modulation fusion is used, the coloration process is efficient and its parameters are adaptive, so the proposed method meets real-time requirements.

In summary, the research in this dissertation aims to enhance fused images while meeting the needs of real-time image fusion systems. It proposes a novel fusion algorithm using the multi-scale top-hat transform to enhance target features during the fusion process; a fast mutual modulation fusion (FMMF) algorithm suitable for real-time systems; and, building on the general coloration model it distills, a new coloration method using FMMF and color transfer for low-light and infrared image pairs. These fusion methods have important theoretical and practical value in research and application areas such as situational awareness, all-weather surveillance, and target detection and tracking.
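The YCbCr coloration pipeline of point 5 can be sketched as follows. The opponent-style Cb/Cr construction below is a simplified stand-in for Toet's common-component method, and the statistics-matching step is the widely used mean/variance color transfer; both are assumptions for illustration, not the dissertation's exact implementation:

```python
import numpy as np

def color_transfer(channel, ref_mean, ref_std, eps=1e-6):
    """Statistics transfer: match a channel's mean/std to a reference
    image's (a hedged stand-in for the dissertation's transfer step)."""
    return (channel - channel.mean()) / (channel.std() + eps) * ref_std + ref_mean

def colorize(lowlight, infrared, fused_y, ref_stats):
    """Assemble a false-color YCbCr image: fused result on Y, opponent
    components on Cb/Cr (simplified stand-in for Toet's method)."""
    common = np.minimum(lowlight, infrared)      # common component of the pair
    cb = 128.0 + 0.5 * (infrared - common)       # IR-unique detail drives Cb
    cr = 128.0 + 0.5 * (lowlight - common)       # visible-unique detail drives Cr
    cb = color_transfer(cb, *ref_stats["cb"])    # push toward natural colors
    cr = color_transfer(cr, *ref_stats["cr"])
    return np.stack([fused_y, np.clip(cb, 0, 255), np.clip(cr, 0, 255)], axis=-1)
```

Here `ref_stats` holds the mean and standard deviation of the Cb/Cr channels of a natural daylight reference image, which is what makes the final appearance look natural rather than arbitrarily false-colored.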
Keywords/Search Tags: image fusion, fast mutual modulation fusion, coloration fusion, multi-scale transform, top-hat transform, color transfer