
Research On Fusion Of Infrared And Visible Light Image Based On Co-occurrence Analysis Shearlet Transform

Posted on: 2022-11-03    Degree: Doctor    Type: Dissertation
Country: China    Candidate: B Qi    Full Text: PDF
GTID: 1488306764499304    Subject: Automation Technology
Abstract/Summary:
Infrared and visible image fusion is one of the most representative and widely applied information fusion technologies at present. Because most of the visual information perceived by the human eye comes from the visible band, images acquired by visible-light detectors are closer to everyday human experience and better matched to human vision. Infrared imaging detectors, in turn, offer a certain smoke-penetration capability and stable anti-interference performance, which compensates for the shortcomings of visible imaging under night-vision and low-illumination conditions. Infrared and visible images can therefore complement each other. If the respective advantages of these two spectra can be effectively combined, the recognizability of infrared targets and the clarity of the images can be enhanced, and a large amount of accurate, detailed information can be retained. This research has high practical value in many fields, such as military operations, electronic surveillance, and resource exploration.

In current research on infrared and visible image fusion algorithms, multi-scale geometric transformation is an extremely effective tool. Its working principle is to decompose the source image into a series of subband components at different scales, design appropriate fusion rules to fuse each component, and obtain the final fused image through the inverse transformation. However, current transformation tools are weak at distinguishing details such as edges and easily lose information during decomposition. In addition, the commonly used fusion rules are not robust to the spectral differences between heterogeneous images, and their fusion performance is unstable across different scenes. In view of these problems, this dissertation improves the visual performance of the fusion algorithm from the following aspects:

1. A multi-scale transformation method, the Co-occurrence Analysis Shearlet Transform (CAST), is proposed. It uses the co-occurrence information of the image itself to operate on pixels and applies the idea of edge detection directly to the filtering process, tightly combining edge detection with edge preservation. Its internal parameters can be set according to image feature values, so its adaptability is strong. Moreover, while retaining the excellent directional sensitivity of the shearlet transform, it decides whether to smooth a pixel according to the statistical information of texture appearance rather than intensity information, so the transform yields finer multi-scale and multi-directional component information. Furthermore, CAST is translation-invariant and completes image decomposition and reconstruction through simple linear difference and superposition operations, without up-sampling or down-sampling, so its computational efficiency is high.

2. For the fusion of low-light and infrared images, a fusion model based on an image saliency measure and a zero-crossing counting measure in the CAST domain is proposed. The model first uses CAST as the multi-scale transformation to obtain the coefficient matrices of the base layer and the detail layers of the source images. The base-layer components, which represent the energy characteristics of the image, first undergo brightness-correction preprocessing; the processed components are then fused by an improved LatLRR operator, which extracts salient features and generates an adaptive weight map to guide fusion and produce the final coefficients of this layer. This step compensates for the lack of brightness in weak light and increases overall contrast. Meanwhile, since zero-crossing counting regularization promotes gradient sparsity, a variational model based on second-order-difference zero-crossing counting regularization is used to fuse the detail-layer components. It restores the gradient characteristics of the two source images to the fused result as faithfully as possible, so that the edge details of the fused image are clearer and fused regions transition naturally. Finally, seven sets of experiments verify that the algorithm achieves better visual quality for infrared and visible image fusion from both subjective and objective perspectives.

3. To address CAST's insufficient sparsifying ability for base-layer components, an image fusion model based on measurement-domain sparse representation and a dual-channel SPCNN model in the CAST domain is proposed. The new algorithm still uses CAST as the multi-scale analysis tool. The base-layer image is first mapped into the measurement domain and divided into edge blocks, texture blocks, and smooth blocks according to its measurement-domain characteristics; the blocks are then classified and sparsely encoded with a classified sparse dictionary and fused according to the characteristics of each block type. For the detail-layer components, an adaptive dual-channel SPCNN model is used as the fusion rule; it is an optimization of the traditional single-channel PCNN model and uses two channels to process heterogeneous information. The neurons of the SPCNN model are activated by an improved spatial frequency operator (ISF), and the model uses the improved weighted sum of Laplacian energies (IWSML) as the adaptive linking strength. Finally, six sets of experiments show that images fused by this algorithm have clear edge texture and high overall contrast, conform to the range of human visual perception, and achieve better fusion results.
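The general multi-scale fusion pipeline described above (decompose each source into base and detail layers, fuse the base layers with a saliency-derived adaptive weight map, fuse the detail layers with an activity-based rule, then reconstruct) can be illustrated with a minimal NumPy sketch. Note the assumptions: a plain box filter stands in for the edge-aware CAST decomposition, local deviation from the mean stands in for the LatLRR saliency measure, and a max-absolute rule replaces the variational detail-layer model; the function names are hypothetical and not from the dissertation.

```python
import numpy as np

def box_blur(img, radius=2):
    """Low-pass stage: a simple box filter standing in for the CAST
    smoothing step (the real transform is edge-aware and directional)."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def decompose(img):
    """Split an image into a base layer (low frequencies) and a detail
    layer; base + detail reconstructs the source exactly, mirroring the
    additive decomposition/reconstruction property claimed for CAST."""
    base = box_blur(img)
    return base, img - base

def fuse(ir, vis):
    """Fuse base layers with a saliency-weighted average and detail
    layers with a max-absolute activity rule, then reconstruct."""
    b_ir, d_ir = decompose(ir)
    b_vis, d_vis = decompose(vis)
    # crude saliency proxy: deviation of each base layer from its mean
    s_ir = np.abs(b_ir - b_ir.mean()) + 1e-8
    s_vis = np.abs(b_vis - b_vis.mean()) + 1e-8
    w = s_ir / (s_ir + s_vis)              # adaptive weight map in [0, 1]
    base_f = w * b_ir + (1.0 - w) * b_vis
    detail_f = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return base_f + detail_f
```

The exact-reconstruction property of the decomposition (base + detail equals the source) is what lets the fusion rules operate on each layer independently before the final inverse step.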
Keywords/Search Tags:image fusion, co-occurrence analysis shearlet transform, adaptive dual-channel SPCNN, measurement domain sparse representation, zero-crossing counting regularization