
Research on Joint Sparse Representation-based Infrared and Visible Image Fusion

Posted on: 2016-10-28    Degree: Master    Type: Thesis
Country: China    Candidate: S S Song    Full Text: PDF
GTID: 2348330509454736    Subject: Signal and Information Processing
Abstract/Summary:
Infrared and visible images differ in imaging principle, spatial resolution, gray scale, and texture and edge characteristics. Fusing them makes full use of the complementary information between the two modalities; this is an important branch of the image fusion field, with great application potential in aviation, remote sensing, and many other areas. The main purpose of this research is the fusion of infrared and visible images. Based on the fundamental theories of image fusion and the popular sparse representation algorithm, the thesis proposes fusion methods built on sparse representation and joint sparse representation. The two primary contributions of the thesis are as follows:

(1) A novel image fusion method based on the over-complete sparse representation theory. Sparse representation can capture the main structure and essential attributes of an image signal with a limited number of nonzero coefficients. Exploiting the two key characteristics of the theory, the over-completeness of the dictionary and the sparsity of the representation coefficients, the fusion result is obtained in four steps: first, divide the source images into blocks with a sliding window; second, learn an over-complete dictionary from these blocks; third, use the classical OMP algorithm to obtain the sparse coefficients of each source; and finally, take the coefficient with the larger modulus to form the sparse coefficients of the fused image and complete the reconstruction. Because the dictionary is trained from the source images themselves, the self-adaptability and accuracy of the fusion method are enhanced. Experimental results show that the method fuses the target information of infrared images and the texture information of visible images well, and that it also has strong robustness to noise. A sketch of this pipeline is given below.

(2) A fusion method combining multi-scale analysis with joint sparse representation. Joint sparse representation extends sparse representation by exploiting the fact that images of the same scene acquired by different sensors share common components while retaining sensor-specific ones; like sparse representation, it relies on a learned dictionary. A learned dictionary can fit data finely, but it cannot analyze data at different scales, whereas the salient information of an image at different scales needs to be distinguished and retained. To resolve this contradiction, the thesis combines multi-scale analysis with joint sparse representation, so that the significant detail characteristics of the source images are represented effectively and the detail information is analyzed at multiple scales. Because the high-frequency and low-frequency bands carry different information under the multi-scale transform, the joint sparse representation method is applied to the low-frequency band, while a selection rule based on the characteristic product of the coefficients is applied to the high-frequency bands. Experimental results show that the infrared targets are more salient and the background is clearer, and the proposed method achieves a better fusion effect than either a single-scale learned dictionary or a multi-scale analysis transform alone. A sketch of the joint sparse coding step for the low-frequency band follows the first example below.
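The following is a minimal sketch of the sparse-representation fusion pipeline summarized in contribution (1). It assumes 8x8 sliding-window patches, a dictionary learned with scikit-learn's MiniBatchDictionaryLearning (the abstract does not name a specific dictionary trainer), OMP sparse coding, and the max-absolute-value ("larger modulus") fusion rule; function and parameter names are illustrative, not the thesis implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

def extract_patches(img, size=8):
    """Slide a size x size window over the image and vectorize each patch."""
    H, W = img.shape
    patches = [img[i:i + size, j:j + size].ravel()
               for i in range(H - size + 1)
               for j in range(W - size + 1)]
    return np.asarray(patches, dtype=float)

def fuse_sr(ir, vis, size=8, n_atoms=256, sparsity=5):
    # Step 1: divide both source images into overlapping blocks.
    P_ir, P_vis = extract_patches(ir, size), extract_patches(vis, size)

    # Step 2: learn an over-complete dictionary from the source patches.
    learner = MiniBatchDictionaryLearning(n_components=n_atoms)
    D = learner.fit(np.vstack([P_ir, P_vis])).components_.T   # shape (size*size, n_atoms)

    # Step 3: OMP sparse coding of each source's patches.
    A_ir = orthogonal_mp(D, P_ir.T, n_nonzero_coefs=sparsity)   # (n_atoms, n_patches)
    A_vis = orthogonal_mp(D, P_vis.T, n_nonzero_coefs=sparsity)

    # Step 4: take the coefficient with the larger modulus, then reconstruct.
    A_f = np.where(np.abs(A_ir) >= np.abs(A_vis), A_ir, A_vis)
    fused_patches = (D @ A_f).T                                  # (n_patches, size*size)

    # Re-assemble the fused image by averaging overlapping patches.
    H, W = ir.shape
    fused, weight = np.zeros((H, W)), np.zeros((H, W))
    k = 0
    for i in range(H - size + 1):
        for j in range(W - size + 1):
            fused[i:i + size, j:j + size] += fused_patches[k].reshape(size, size)
            weight[i:i + size, j:j + size] += 1
            k += 1
    return fused / np.maximum(weight, 1)
```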
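The next sketch illustrates the joint-sparse-representation step of contribution (2) on a pair of vectorized low-frequency patches, using the standard common/innovation joint model. The multi-scale decomposition, the exact rule for combining the innovation parts, and the high-frequency "characteristic product" rule are not spelled out in the abstract, so the combination rule below is only one plausible choice under those assumptions.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def jsr_fuse_lowfreq(y_ir, y_vis, D, sparsity=10):
    """Fuse two vectorized low-frequency patches with a shared dictionary D (n x k)."""
    n, k = D.shape
    # Joint model: [y_ir; y_vis] = [D D 0; D 0 D] [x_common; x_ir; x_vis],
    # where x_common captures the shared content and x_ir, x_vis the innovations.
    D_joint = np.block([[D, D, np.zeros((n, k))],
                        [D, np.zeros((n, k)), D]])
    y_joint = np.concatenate([y_ir, y_vis])
    x = orthogonal_mp(D_joint, y_joint, n_nonzero_coefs=sparsity)
    x_common, x_ir, x_vis = x[:k], x[k:2 * k], x[2 * k:]

    # Keep the common component plus the stronger innovation (one possible rule).
    x_innov = x_ir if np.linalg.norm(x_ir, 1) >= np.linalg.norm(x_vis, 1) else x_vis
    return D @ (x_common + x_innov)
```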
Keywords/Search Tags: Infrared and visible image fusion, Sparse representation, Over-complete dictionary, Multi-scale analysis transform, Joint sparse representation