
Research On Infrared And Visible Image Fusion Based On Multi-level Information Extraction And Transmission

Posted on: 2022-11-17    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Q Q Li    Full Text: PDF
GTID: 1488306764998969    Subject: Computer Software and Application of Computer
Abstract/Summary:
Image fusion is a key technology in the field of computer vision. It aims to combine images captured by sensors of different modalities in the same scene so as to enhance scene understanding. The key problem of image fusion is to adequately extract complementary information from the source images and combine it to generate fused images with significant intensity information, clear edge contours, rich detail textures and good visual quality. Infrared sensors capture images at night or in camouflaged environments according to the thermal radiation of the target and work well under day/night and all-weather conditions; however, the images they produce generally have low resolution and lack detailed texture. Visible sensors capture rich background information through the spectral reflection of objects, and visible images have high resolution; however, visible sensors are easily affected by illumination variations and severe weather. Because infrared and visible images are strongly complementary, fusing them describes the scene more comprehensively, and such fusion is widely used in machine vision tasks such as target detection, recognition and tracking. This paper mainly studies infrared and visible image fusion. From the perspective of multi-level information acquisition, it explores how information at different levels is transmitted and combined so as to realize high-quality fusion of infrared and visible images. The main research contents and novelties of this paper are described below:

(1) Infrared and visible image fusion guided by multi-level low-rank decomposition and saliency. When designing fusion rules, some traditional image fusion methods focus on the extracted features but ignore the salient semantic information and some texture and contour features of the input images, so the target area is not salient and the edges are not clear in the fused image. To tackle these problems, this paper proposes an infrared and visible image fusion method guided by multi-level low-rank decomposition and saliency. First, to facilitate the design of fusion rules, the latent low-rank representation is employed to decompose each source image into a base level containing intensity information and a detail level containing texture information. Then, in the base-level fusion stage, a fusion strategy guided by the saliency maps of the source images is constructed to improve the pixel intensity of salient target areas and the visual quality of the fused image. In addition, gradient information is introduced into detail-level fusion to enhance the richness of texture and the definition of edge contours. Finally, the fusion result is obtained by reconstructing the fused base level and detail level. A series of ablation experiments verifies the rationality of the innovative modules of the proposed method. The proposed method is compared with 9 fusion algorithms on three public datasets; experimental results and analyses demonstrate that it achieves satisfactory fusion performance and surpasses other traditional and state-of-the-art algorithms in both qualitative and quantitative comparisons.
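The following is a minimal Python/NumPy sketch of the base/detail pipeline described in (1). It is illustrative only: the dissertation decomposes the source images with latent low-rank representation and its own saliency model, whereas here a Gaussian filter stands in for the decomposition and a mean-deviation map stands in for the saliency estimator; all function names (decompose, saliency_map, fuse_base, fuse_detail, fuse) are hypothetical.

# Hypothetical sketch of the base/detail fusion pipeline described above.
# NOTE: the dissertation uses latent low-rank representation (LatLRR) for the
# decomposition and its own saliency measure; here a Gaussian filter and a
# mean-deviation map stand in for them. All names are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def decompose(img, sigma=5.0):
    """Split an image into a smooth base level and a residual detail level."""
    base = gaussian_filter(img, sigma)          # stand-in for the LatLRR low-rank part
    detail = img - base                         # stand-in for the LatLRR detail part
    return base, detail

def saliency_map(img):
    """Crude saliency proxy: distance of each pixel from the global mean intensity."""
    s = np.abs(img - img.mean())
    return s / (s.max() + 1e-8)

def fuse_base(base_ir, base_vis, sal_ir, sal_vis):
    """Saliency-guided weighted average of the base levels."""
    w_ir = sal_ir / (sal_ir + sal_vis + 1e-8)
    return w_ir * base_ir + (1.0 - w_ir) * base_vis

def fuse_detail(det_ir, det_vis):
    """Gradient-guided choose-max rule for the detail levels."""
    g_ir = np.hypot(sobel(det_ir, 0), sobel(det_ir, 1))
    g_vis = np.hypot(sobel(det_vis, 0), sobel(det_vis, 1))
    return np.where(g_ir >= g_vis, det_ir, det_vis)

def fuse(ir, vis):
    base_ir, det_ir = decompose(ir)
    base_vis, det_vis = decompose(vis)
    fused_base = fuse_base(base_ir, base_vis, saliency_map(ir), saliency_map(vis))
    fused_detail = fuse_detail(det_ir, det_vis)
    return np.clip(fused_base + fused_detail, 0.0, 1.0)   # reconstruct the fused image

Given two registered single-channel images normalized to [0, 1], fuse(ir, vis) returns the reconstructed fusion result; the saliency weighting drives the base-level combination and the gradient rule selects the stronger detail response, mirroring the roles the two cues play in the method above.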
(2) A multi-level hybrid transmission network for infrared and visible image fusion. Convolutional neural networks are gradually being applied to infrared and visible image fusion because of their superior feature extraction ability. However, most deep learning fusion methods ignore the communication between features at different levels, resulting in a loss of information in the fused image. In addition, some deep learning networks still rely on hand-designed, complex fusion rules to ensure fusion quality, which makes image fusion a computationally demanding task. To solve these problems, this paper proposes a multi-level hybrid transmission network for infrared and visible image fusion. The network is mainly composed of a multi-level residual encoder module and a hybrid transmission decoder module. In the encoder, considering the great differences between infrared and visible images, two independent residual encoder branches are designed to extract the multi-level features of the infrared and visible images. To reduce the complexity of the fusion network, a concatenate-convolution operation is adopted to integrate features of the same level in place of complicated fusion rules, yielding fused feature maps at different levels. In the decoder, to improve the information richness of the fused image, a hybrid transmission decoder module is constructed to make full use of features at different levels. The module includes two parts: cross transmission and skip transmission. Cross transmission lets features of different levels complement each other; skip transmission remedies the information loss in the decoding stage, transmitting more features to the fusion result and improving the quality of the fused image. The performance of the proposed method is tested on three public datasets. A series of subjective and objective experimental results shows that it not only achieves high-quality image fusion but also runs quickly and efficiently.

(3) A multi-level sparsely dense connection network for infrared and visible image fusion. This paper further explores the transmission mode between features at different levels and presents a multi-level sparsely dense connection network for infrared and visible image fusion. A densely connected convolutional network passes the output of each level to all subsequent layers, which effectively enhances feature transmission between levels. However, overly dense connections between the network structures at each level inevitably increase the complexity of the fusion model, and excessive feature reuse causes a certain degree of feature redundancy and reduces the performance of the fusion network. Considering these problems, a multi-level sparsely dense connection structure is designed in this paper: the densely connected convolutional network is simplified and replaced by an interval transmission scheme, which enhances the communication between multi-level information while reducing model parameters and redundant features. The performance of the proposed network is tested on three public datasets. A number of experiments show that the proposed algorithm runs faster while ensuring high-quality image fusion.
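To make the architecture of contribution (2) concrete, the following schematic PyTorch sketch wires two residual encoder branches, same-level concatenate-convolution fusion, and a decoder with cross and skip transmission. The abstract gives no layer counts, channel widths, or exact wiring, so the three-level depth, the 16-channel width, and every module name here (Encoder, HybridDecoder, HybridFusionNet) are illustrative assumptions rather than the dissertation's implementation.

# Schematic sketch of the multi-level hybrid transmission network described above.
# All sizes, the three-level depth, and the exact wiring of cross/skip transmission
# are assumptions; the abstract does not specify them.
import torch
import torch.nn as nn

def conv(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(conv(ch, ch), nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class Encoder(nn.Module):
    """One residual encoder branch; returns features from three levels."""
    def __init__(self, ch=16):
        super().__init__()
        self.stem = conv(1, ch)
        self.levels = nn.ModuleList([ResidualBlock(ch) for _ in range(3)])
    def forward(self, x):
        feats, f = [], self.stem(x)
        for level in self.levels:
            f = level(f)
            feats.append(f)
        return feats

class HybridDecoder(nn.Module):
    """Cross transmission mixes fused features of adjacent levels; skip transmission
    re-injects the fused encoder features before reconstruction."""
    def __init__(self, ch=16):
        super().__init__()
        self.cross = nn.ModuleList([conv(2 * ch, ch) for _ in range(2)])
        self.skip = conv(3 * ch, ch)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)
    def forward(self, fused):                     # fused: list of 3 same-size feature maps
        x = fused[0]
        for i, cross in enumerate(self.cross):    # cross transmission across levels
            x = cross(torch.cat([x, fused[i + 1]], dim=1))
        x = self.skip(torch.cat([x, fused[1], fused[2]], dim=1))  # skip transmission
        return torch.sigmoid(self.out(x))

class HybridFusionNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc_ir, self.enc_vis = Encoder(ch), Encoder(ch)
        # concatenate-convolution fusion of same-level features replaces hand-crafted rules
        self.fuse = nn.ModuleList([conv(2 * ch, ch) for _ in range(3)])
        self.dec = HybridDecoder(ch)
    def forward(self, ir, vis):
        f_ir, f_vis = self.enc_ir(ir), self.enc_vis(vis)
        fused = [fz(torch.cat([a, b], dim=1)) for fz, a, b in zip(self.fuse, f_ir, f_vis)]
        return self.dec(fused)

With two single-channel inputs, for example HybridFusionNet()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)), the sketch produces a fused map of the same spatial size, since no downsampling is used here.

For contribution (3), a similarly hedged sketch of the interval ("sparsely dense") connection pattern is given below. Instead of concatenating all earlier outputs as in a densely connected network, each layer concatenates only every interval-th earlier feature map, which trims parameters and redundant features. The interval of 2 and the layer count are assumptions, and conv() is reused from the sketch above.

# Illustrative sketch of the sparsely dense (interval transmission) connection pattern.
# The interval of 2 and the number of layers are assumptions; the block input is
# assumed to already have 'ch' channels.
class SparselyDenseBlock(nn.Module):
    def __init__(self, ch=16, n_layers=4, interval=2):
        super().__init__()
        self.interval = interval
        # layer i sees i // interval + 1 feature maps of 'ch' channels each
        self.layers = nn.ModuleList(
            [conv((i // interval + 1) * ch, ch) for i in range(n_layers)]
        )
    def forward(self, x):
        feats = [x]                                  # block input plus outputs produced so far
        for i, layer in enumerate(self.layers):
            taken = feats[i::-self.interval]         # newest feature, then every interval-th older one
            feats.append(layer(torch.cat(taken, dim=1)))
        return feats[-1]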
Keywords/Search Tags:Infrared and visible, image fusion, low rank decomposition, saliency, hybrid transmission, sparsely dense connection