
Research On Multi-source Image Fusion Method Based On Multi-scale Transform

Posted on: 2020-09-06    Degree: Doctor    Type: Dissertation
Country: China    Candidate: F Wang    Full Text: PDF
GTID: 1488306740471874    Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
Owing to the inherent characteristics of different types of imaging sensors, the image information they collect differs, and the collected images contain both redundant and complementary information. Even sensors of the same imaging type but with different transducer parameters or focal ranges can capture different, complementary information about the same background or target. Multi-source image fusion aims to integrate the image information acquired in these two situations into a single composite image that describes the background or target more comprehensively, clearly and accurately, thereby providing a more reliable information source for subsequent human visual perception and target detection. Multi-scale transform (MST) theory, characterized by multi-scale and multi-directional analysis and anisotropy, underlies a widely used family of image fusion methods. This dissertation analyzes the deficiencies of several domestic and international image fusion methods and studies MST-based multi-source image fusion using the non-subsampled contourlet transform (NSCT), the shearlet transform (SHT) and the non-subsampled shearlet transform (NSST). The details are as follows:

1. To address the poor contrast and brightness of fused images produced by MST-based infrared and visible image fusion, an infrared and visible image fusion method based on NSCT and multi-scale sequence toggle operator (MSSTO) feature extraction is proposed. NSCT decomposes the infrared and visible images at multiple scales and in multiple directions. For the low-frequency components produced by NSCT, the MSSTO extracts the bright and dark features of the two images' low-frequency components, and these features are injected into the weighted-average low-frequency sub-band coefficients to obtain the fused low-frequency component. For the high-frequency components produced by NSCT, a rule that mixes the spatial frequency and the energy within a fixed local region is used for fusion. Fusion results on four different types of infrared and visible image pairs show that the proposed method not only alleviates the serious information loss in the fused image but also effectively improves its brightness and contrast.

2. The pulse coupled neural network (PCNN) is a network model built by simulating neuronal activity in the mammalian visual cortex. Neurons receiving similar stimuli fire synchronously, and the form of its signals and its image-processing mechanism are consistent with the physiological basis of the human visual system, giving it unique advantages over other image-processing methods. However, when the PCNN is used for image fusion it suffers from an excess of undetermined parameters, a poorly adaptive linking strength, and overly frequent, overlapping firing settings. This dissertation develops a dual-channel unit-linking memristive pulse coupled neural network (DUM-PCNN). Compared with the original PCNN model, the DUM-PCNN simplifies several peripheral parameters, uses the regional spatial frequency as an adaptive linking strength, defines a time matrix to adaptively determine the number of network iterations, and possesses memory and global coupling. On this basis, the model is applied to the color fusion of infrared and low-light night-vision images, yielding a night-vision color fusion method based on NSST and DUM-PCNN. NSST decomposes the infrared and low-light images into low-frequency and high-frequency components. For the low-frequency components, a Kirsch feature energy is constructed with an eight-direction template, and a maximum-Kirsch-feature-energy rule is adopted for fusion. For the high-frequency components, the Kirsch feature energy is used to excite the DUM-PCNN for fusion. The fused image, the infrared image and the low-light image are then combined in YUV color space to obtain a pseudo-color image; a color reference image is converted into YUV space for color transfer to the pseudo-color image, and the transferred image is converted back to RGB color space to obtain the colored false-color fusion image. Experiments on infrared and low-light night-vision image fusion in four different scenes show that the proposed method reduces color distortion and achieves a good visual effect.

3. To address edge blurring in MST-based fusion of infrared and synthetic aperture radar (SAR) images, an infrared and SAR image fusion method based on a multi-scale non-local shear directional filter (MNSDF) is proposed. The method replaces the non-subsampled Laplacian pyramid filter with multi-scale non-local means filtering for multi-scale decomposition and combines it with multi-directional shearing to construct the MNSDF decomposition tool. The MNSDF decomposes the infrared and SAR images into an approximation sub-band and directional sub-bands. For the approximation sub-band, a rule that takes the maximum of the total energy obtained by combining regional energy and gradient energy is adopted for fusion. For the directional sub-bands, a fusion rule based on the absolute values of the directional sub-band coefficients and the gradient energy within a fixed local region is proposed. The inverse MNSDF transform is applied to the fused approximation and directional sub-bands to obtain the fused image. Experimental results on four groups of infrared and SAR images show that the proposed method produces fused images with the clearest edges and contours.

4. To address the pseudo-Gibbs ringing effect in the edge regions of fused images produced by MST-based multi-focus image fusion, a dual-domain joint multi-focus image fusion method is proposed. SHT decomposes the multi-focus images into low-frequency, medium-high-frequency and highest-frequency sub-band coefficients. For the low-frequency sub-band coefficients, three activity measurements (TAM) are proposed to stimulate the spiking cortical model (SCM), and a rule based on the total firing times and the absolute values of the low-frequency sub-band coefficients is adopted for fusion. For the medium-high-frequency and highest-frequency sub-band coefficients, rules based on local gradient energy and spatial frequency are used, respectively. Applying the inverse SHT to the fused low-frequency, medium-high-frequency and highest-frequency sub-band coefficients yields a preliminary fusion image. This preliminary fusion image is subtracted from the original images to obtain difference images; the local-region Laplacian energy is used to detect the focused regions of the difference images, and the detected focus regions guide the fusion of the original multi-focus images to obtain the final fused image. Results of four groups of multi-focus image fusion experiments show that the proposed method effectively reduces the Gibbs ringing effect at the edges of the fused image and achieves a good fusion visual effect.
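To make the low- and high-frequency rules in contribution 1 concrete, the following Python/NumPy/SciPy sketch shows one plausible implementation. It is only an illustrative sketch, not the dissertation's NSCT pipeline: standard morphological white/black top-hats stand in for the MSSTO, and the function names, scales, window size and injection gain are hypothetical.

import numpy as np
from scipy import ndimage


def fuse_lowpass_with_features(low_ir, low_vis, scales=(3, 5, 7), gain=0.5):
    """Weighted-average low-frequency fusion with injected bright/dark features."""
    base = 0.5 * (low_ir + low_vis)                      # weighted-average fusion
    bright = np.zeros_like(base)
    dark = np.zeros_like(base)
    for s in scales:                                     # multi-scale bright/dark features
        for img in (low_ir, low_vis):
            bright = np.maximum(bright, ndimage.white_tophat(img, size=s))
            dark = np.maximum(dark, ndimage.black_tophat(img, size=s))
    return base + gain * (bright - dark)                 # inject light/dark features


def fuse_highpass_by_region_activity(h_ir, h_vis, win=7):
    """Keep the coefficient with larger regional spatial frequency plus regional energy."""
    def activity(h):
        rf = ndimage.uniform_filter(np.diff(h, axis=1, prepend=h[:, :1]) ** 2, win)
        cf = ndimage.uniform_filter(np.diff(h, axis=0, prepend=h[:1, :]) ** 2, win)
        return np.sqrt(rf + cf) + ndimage.uniform_filter(h ** 2, win)
    return np.where(activity(h_ir) >= activity(h_vis), h_ir, h_vis)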
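The Kirsch-feature-energy rule and the false-color construction of contribution 2 can be sketched as follows. The eight-direction Kirsch templates are standard; the precise definition of the feature energy, the DUM-PCNN itself and the color-transfer step are not reproduced here, and the YUV channel assignment shown is an assumption rather than the dissertation's rule.

import numpy as np
from scipy import ndimage


def kirsch_kernels():
    """Eight Kirsch compass kernels generated by rotating the 3x3 border values."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = np.array([5, 5, 5, -3, -3, -3, -3, -3], dtype=float)
    kernels = []
    for k in range(8):
        kern = np.zeros((3, 3))
        for value, (r, c) in zip(np.roll(base, k), ring):
            kern[r, c] = value
        kernels.append(kern)
    return kernels


def kirsch_feature_energy(img, win=5):
    """Locally averaged squared maximum Kirsch response (one plausible definition)."""
    responses = [np.abs(ndimage.convolve(img, k)) for k in kirsch_kernels()]
    return ndimage.uniform_filter(np.max(responses, axis=0) ** 2, win)


def fuse_lowpass_by_kirsch(low_ir, low_ll):
    """Maximum Kirsch-feature-energy rule for the low-frequency bands."""
    return np.where(kirsch_feature_energy(low_ir) >= kirsch_feature_energy(low_ll),
                    low_ir, low_ll)


def pseudo_color(fused_y, ir, ll):
    """Hypothetical false-color assignment: fused image as luminance, source
    differences as chrominance, followed by a YUV -> RGB conversion."""
    u, v = ll - fused_y, ir - fused_y
    r = fused_y + 1.140 * v
    g = fused_y - 0.395 * u - 0.581 * v
    b = fused_y + 2.033 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)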
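A rough, assumption-laden sketch of the decomposition and fusion rules in contribution 3: non-local means smoothing (scikit-image's denoise_nl_means) stands in for the multi-scale non-local filtering, the shear directional filtering is omitted, and all parameter values are placeholders.

import numpy as np
from scipy import ndimage
from skimage.restoration import denoise_nl_means


def nlm_decompose(img, levels=2, h=0.05):
    """Approximation via repeated non-local means smoothing; details are residuals."""
    details, approx = [], img
    for _ in range(levels):
        smooth = denoise_nl_means(approx, patch_size=5, patch_distance=6, h=h)
        details.append(approx - smooth)
        approx = smooth
    return approx, details


def region_activity(band, win=7):
    """Regional energy plus gradient energy inside a fixed local window."""
    gy, gx = np.gradient(band)
    return ndimage.uniform_filter(band ** 2 + gx ** 2 + gy ** 2, win)


def fuse_infrared_sar(ir, sar):
    """Maximum combined-energy rule for the approximation band; absolute value
    weighted by regional gradient energy for the detail bands."""
    a_ir, d_ir = nlm_decompose(ir)
    a_sar, d_sar = nlm_decompose(sar)
    approx = np.where(region_activity(a_ir) >= region_activity(a_sar), a_ir, a_sar)
    details = [np.where(np.abs(di) * region_activity(di) >=
                        np.abs(ds) * region_activity(ds), di, ds)
               for di, ds in zip(d_ir, d_sar)]
    return approx + sum(details)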
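The spatial-domain refinement stage of contribution 4 (focus detection on difference images via local Laplacian energy) might look roughly like the sketch below. The SHT stage that produces the preliminary fusion image is not shown, and the focus criterion, mask regularisation and window size are assumptions.

import numpy as np
from scipy import ndimage


def focus_guided_fusion(src_a, src_b, prelim, win=9):
    """Fuse two multi-focus sources guided by a focus mask derived from the
    difference between each source and a preliminary (SHT-domain) fusion."""
    def local_laplacian_energy(img):
        return ndimage.uniform_filter(ndimage.laplace(img) ** 2, win)
    # Assumption: the source whose difference from the (all-in-focus) preliminary
    # fusion carries less local Laplacian energy is treated as the focused one;
    # the dissertation's precise focus criterion may be defined differently.
    e_a = local_laplacian_energy(src_a - prelim)
    e_b = local_laplacian_energy(src_b - prelim)
    mask = e_a <= e_b
    mask = ndimage.median_filter(mask.astype(float), size=win) > 0.5   # regularise mask
    return np.where(mask, src_a, src_b)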
Keywords/Search Tags:Image fusion, Multi-scale transform, Shear direction filtering, Pulse coupled neural network, Active measurement, Spiking cortical model