
Research On Infrared And Visible Image Fusion Algorithms Based On Modal Features

Posted on: 2023-07-05
Degree: Master
Type: Thesis
Country: China
Candidate: X K Kong
Full Text: PDF
GTID: 2568306806473234
Subject: Computer technology
Abstract/Summary:
With the rapid development of sensor technology, people have access to more and more information. Different sensors reflect different aspects of a scene and help people understand the nature of an object more fully. Although complementary information exists between images from different sensors, redundant information inevitably exists between the source images as well. Image fusion techniques therefore aim to fuse two source images into a single image that removes the redundant information and retains the complementary information. Infrared sensors capture the thermal radiation in a scene, so infrared images have high contrast and can separate prominent targets from the background. Visible sensors capture light reflected in the scene, so visible images are richer in texture information. Infrared and visible image fusion thus requires the full integration of these two complementary kinds of information into a single image. Traditional image fusion algorithms commonly decompose images into high-frequency information, which reflects the details, textures, and edges in the image, and low-frequency information, which mainly reflects the intensity distribution of pixels in the scene. This decomposition does not fully account for edge information. Deep learning methods, in contrast, tend to ignore either the feature differences between the two modal images or the interactions between different kinds of feature information, leading to problems such as loss of texture in the fused image and low contrast of salient targets. In response to these shortcomings of the traditional and deep methods, this thesis proposes two new infrared and visible image fusion schemes. The main innovative work of the thesis is as follows:

1) A fusion scheme based on a modified side window filtering (MSWF) and an intensity transformation function (ITF) is proposed to address the loss of detail caused by the simple decomposition of the image into high- and low-frequency components in traditional algorithms. Firstly, we construct MSWF by adding four new filter kernels to side window filtering to improve its edge-preserving capability, and use MSWF to decompose the image into a base layer and a detail layer. Owing to the edge-preserving effect of MSWF, the edge information is retained in the base layer. Then, we extract the high and low frequencies of the base layer using the multi-scale, multi-directional non-subsampled shearlet transform (NSST), so that the edge information is decomposed into the high-frequency layer. Subsequently, we propose an S-shaped ITF for enhancing salient information and suppressing non-salient information in the infrared images. In the fusion process, considering the characteristics of each decomposition component, we design different fusion rules to obtain the fused detail layer and the fused low- and high-frequency layers. The final fused image is obtained by the inverse NSST and the inverse MSWF. The effectiveness of the proposed method is demonstrated on the public TNO and OSU datasets.

2) At present, most deep learning-based fusion methods fuse the features of the two modal images directly, without fully considering their specific attributes, which biases the fused image toward the features of one modality. In this thesis, a two-stage feature transfer and supplement fusion network (FTSFN) is proposed for infrared and visible image fusion. In the first stage, a feature transfer network (FTN) is proposed to reduce the domain gap between the two modalities by transferring the features of one modality to the other. Based on the constructed FTN and the input images, two networks, FTN_IR and FTN_VIS, are pre-trained to obtain the enhanced infrared and visible features. In the second stage, a feature supplement fusion network (FSFN) is built by constructing two network branches with shared weights to achieve the fusion of the enhanced features. In the FSFN, two feature supplement modules, an intensity-based feature supplement module (IFSM) and a gradient-based feature supplement module (GFSM), are designed to effectively complement the intensity and texture information of the two enhanced features mutually. In addition, to better guide the training of the FTNs and the FTSFN, different loss functions are defined by exploiting the domain features of the source images. Extensive experiments on widely used fusion datasets verify the effectiveness and superiority of the proposed FTSFN.
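The abstract does not give the S-shaped ITF of scheme 1 in closed form. A minimal sketch, assuming a logistic curve with hypothetical `gain` and `midpoint` parameters (not taken from the thesis), could look like:

```python
import numpy as np

def s_shaped_itf(img, gain=10.0, midpoint=0.5):
    """Hypothetical S-shaped intensity transformation: a logistic curve
    that boosts intensities above `midpoint` (salient, hot regions in an
    infrared image) and suppresses those below it.  `gain` and `midpoint`
    are illustrative parameters, not the thesis's exact formulation."""
    x = np.asarray(img, dtype=np.float64)  # intensities assumed in [0, 1]
    return 1.0 / (1.0 + np.exp(-gain * (x - midpoint)))

# A salient pixel (0.9) is pushed toward 1 and the background (0.2)
# toward 0, while the midpoint intensity maps to 0.5.
ir = np.array([[0.2, 0.5, 0.9]])
out = s_shaped_itf(ir)
```

Any monotone S-shaped curve would serve the same purpose of stretching the contrast between salient and non-salient infrared intensities.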
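The GFSM of scheme 2 is a learned network module; purely to convey the idea of gradient-based mutual supplementing, here is a hand-crafted (non-learned) sketch in which the feature map with the stronger local gradient lends its texture to the other:

```python
import numpy as np

def gradient_magnitude(f):
    """Per-pixel gradient magnitude of a 2-D feature map."""
    gy, gx = np.gradient(np.asarray(f, dtype=np.float64))
    return np.hypot(gx, gy)

def gradient_feature_supplement(feat_a, feat_b, eps=1e-8):
    """Illustrative stand-in for a gradient-based supplement step (the
    thesis's GFSM learns this): wherever one feature map has a stronger
    local gradient, it contributes its texture to the other; regions
    where both maps are flat are left unchanged."""
    ga, gb = gradient_magnitude(feat_a), gradient_magnitude(feat_b)
    w_a = ga / (ga + gb + eps)  # how much feat_a informs feat_b
    w_b = gb / (ga + gb + eps)  # how much feat_b informs feat_a
    sup_a = feat_a + w_b * (feat_b - feat_a)
    sup_b = feat_b + w_a * (feat_a - feat_b)
    return sup_a, sup_b

# Toy example: feat_a has a vertical step edge, feat_b is flat, so the
# edge texture flows from feat_a into the supplemented feat_b.
feat_a = np.array([[0.0, 0.0, 1.0, 1.0],
                   [0.0, 0.0, 1.0, 1.0]])
feat_b = np.zeros_like(feat_a)
sup_a, sup_b = gradient_feature_supplement(feat_a, feat_b)
```

An intensity-based counterpart (the IFSM analogue) would weight by pixel intensity rather than gradient magnitude; the mutual-supplement structure is the same.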
Keywords/Search Tags: infrared and visible image fusion, modified side window filtering (MSWF), intensity transformation function (ITF), domain gap, feature transfer network (FTN), feature supplement fusion network (FSFN)