
Research Of Infrared And Visible Image Fusion Algorithm Based On Generative Adversarial Network

Posted on: 2024-08-04    Degree: Master    Type: Thesis
Country: China    Candidate: J H Xiao    Full Text: PDF
GTID: 2568307136496154    Subject: Electronic information
Abstract/Summary:
Infrared and visible image fusion aims to integrate the complementary features of different modal images of the same scene into a fused image with clear texture information and prominent target contours, so as to describe complex scenes more completely and accurately. It has broad application prospects in the military, surveillance-security, and transportation fields. Fusion of infrared and visible images based on generative adversarial networks is a current research hotspot and has produced a series of satisfactory fusion results. However, problems remain, such as incomplete feature extraction, loss of texture information, and failure to effectively highlight typical features. To address these problems, this thesis proposes three generative adversarial networks for infrared and visible image fusion. The main research contents and innovations are as follows:

(1) A fusion method based on a Laplacian-pyramid generative adversarial network is proposed. To address the incomplete feature extraction, loss of texture detail, and unstable training of existing methods, a generator composed of a shallow feature extraction module, a Laplacian pyramid module, and a reconstruction module is constructed to progressively extract multiscale features, while an attention module effectively highlights the salient features of the source images. Two discriminators then distinguish the fused image from the two source modalities. In addition, to improve the stability of adversarial learning, a pre-fused image provided by an auxiliary network is introduced as an auxiliary supervision loss. Extensive experiments show that, compared with seven existing deep-learning-based fusion methods, the proposed algorithm is competitive in both qualitative and quantitative evaluation.

(2) A fusion method based on a cross-scale pyramid attention generative adversarial network is proposed. To address the inability of existing algorithms to fully extract the complementary information of images of different modalities, pyramid and fusion modules are used to integrate the complementary features of infrared and visible images at the same scale. To effectively highlight the salient features in multimodal images, a cross-scale pyramid attention module is designed to further strengthen the correlation between the channel features of adjacent pyramid levels. A long short-term memory network is also introduced to fully learn global context information and avoid the loss of details. Compared with seven representative fusion algorithms, the proposed method has certain advantages in qualitative and quantitative evaluation.

(3) A fusion method based on a pyramid-pooling fusion generative adversarial network is proposed. To address the inability of existing fusion algorithms to fully integrate complementary information across scales, a pyramid-pooling fusion module is designed: feature maps at different scales are obtained by spatial pyramid pooling of the source-image features at the corresponding scales, and the pooled features of the same scale are fused to ensure that the complementary information of the different modalities is fully integrated. In addition, a CBAM attention mechanism is used to refine the pyramid features after fusion at each level, further improving fusion quality. Quantitative and qualitative experiments on three public datasets show that the proposed algorithm outperforms the seven typical fusion methods used for comparison.
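The Laplacian pyramid module in contribution (1) decomposes features into band-pass levels that the generator later reconstructs. The following minimal PyTorch sketch illustrates that decomposition and reconstruction only; it is not the thesis code, and the 5x5 binomial kernel, level count, and bilinear upsampling are assumptions.

import torch
import torch.nn.functional as F

def gaussian_kernel(channels, device):
    # 5x5 binomial smoothing kernel, one copy per channel (depthwise conv)
    k = torch.tensor([1., 4., 6., 4., 1.], device=device)
    k = torch.outer(k, k)
    k = k / k.sum()
    return k.expand(channels, 1, 5, 5)

def pyr_down(x):
    # Smooth, then drop every other row and column
    k = gaussian_kernel(x.shape[1], x.device)
    x = F.conv2d(x, k, padding=2, groups=x.shape[1])
    return x[:, :, ::2, ::2]

def pyr_up(x, size):
    # Bilinear upsampling is an assumption; classic pyramids use Gaussian upsampling
    return F.interpolate(x, size=size, mode='bilinear', align_corners=False)

def laplacian_pyramid(x, levels=3):
    # Each entry stores the detail lost by one downsampling step;
    # the last entry is the low-frequency residual.
    pyr, cur = [], x
    for _ in range(levels):
        down = pyr_down(cur)
        pyr.append(cur - pyr_up(down, cur.shape[-2:]))
        cur = down
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    # Inverse operation: upsample the residual and add back each detail level
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = pyr_up(cur, detail.shape[-2:]) + detail
    return cur

In such a design the fusion and attention modules would operate level by level on the pyramids of both modalities before reconstruct() assembles the fused result.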
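Contribution (2) strengthens the correlation between channel features of adjacent pyramid levels. One plausible, purely illustrative form of such a cross-scale channel attention is sketched below, assuming squeeze-and-excitation-style weighting and equal channel counts at both levels; the module and parameter names are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, fine, coarse):
        # Global average pooling gives one channel descriptor per level
        d_fine = F.adaptive_avg_pool2d(fine, 1).flatten(1)
        d_coarse = F.adaptive_avg_pool2d(coarse, 1).flatten(1)
        # The joint descriptor produces channel weights applied to the finer level
        w = self.mlp(torch.cat([d_fine, d_coarse], dim=1))
        return fine * w.unsqueeze(-1).unsqueeze(-1)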
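Contribution (3) fuses same-scale features obtained by spatial pyramid pooling and then merges the levels. A hedged sketch of what such a pyramid-pooling fusion step could look like follows, assuming adaptive average pooling, 1x1 fusion convolutions, and bilinear upsampling; all names and pool sizes are illustrative, not the thesis design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPoolFusion(nn.Module):
    def __init__(self, channels, pool_sizes=(1, 2, 4, 8)):
        super().__init__()
        self.pool_sizes = pool_sizes
        # One 1x1 conv per scale fuses the concatenated infrared/visible features
        self.fuse = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, kernel_size=1) for _ in pool_sizes]
        )
        self.out = nn.Conv2d(len(pool_sizes) * channels, channels, kernel_size=1)

    def forward(self, feat_ir, feat_vis):
        h, w = feat_ir.shape[-2:]
        fused_levels = []
        for size, fuse in zip(self.pool_sizes, self.fuse):
            ir = F.adaptive_avg_pool2d(feat_ir, size)
            vis = F.adaptive_avg_pool2d(feat_vis, size)
            # Fuse features of the same scale, then restore the input resolution
            level = fuse(torch.cat([ir, vis], dim=1))
            fused_levels.append(F.interpolate(level, size=(h, w),
                                              mode='bilinear', align_corners=False))
        return self.out(torch.cat(fused_levels, dim=1))

A CBAM-style attention block, as the abstract mentions, could then refine the output of each fused level before reconstruction.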
Keywords/Search Tags: Infrared and Visible Image Fusion, Generative Adversarial Network, Laplacian Pyramid, Attention Mechanism, Pyramid Pooling, Long Short-Term Memory Network