
Study On Infrared And Visible Image Fusion Based On Multi-scale Information Separation In Frequency Domain

Posted on: 2024-07-17
Degree: Master
Type: Thesis
Country: China
Candidate: S Zong
Full Text: PDF
GTID: 2568307166472064
Subject: Electronic Science and Technology
Abstract/Summary:
Infrared and visible image fusion extracts and fuses features from multiple source images, retaining complementary information and suppressing redundant information to generate a fused image with comprehensive feature content; it is widely used in national defense, remote sensing, medicine, bio-detection, and other fields. In recent years, the most common approaches have been image fusion networks based on autoencoders: the source images are first encoded, the feature information is fused in the latent space, and the fused image is then reconstructed by a decoder. However, information loss in the encoding stage and the use of simple, non-learnable fusion strategies are two major challenges for such models. To enhance the details of fused images and improve fusion effectiveness, this thesis studies infrared and visible image fusion methods based on multi-scale information separation in the frequency domain, with the following main contributions.

(1) To address information loss in the encoding stage of the autoencoder model, an autoencoder based on multi-scale decomposition in the frequency domain is proposed. A filter converts the source image into three branches of feature information (high-frequency, mid-frequency, and low-frequency), and a dedicated feature extraction network processes each branch. After the feature information is fused, a decoder reconstructs the fused image.

(2) To improve on the mechanical decomposition of the multi-scale model, a contrastive-learning-guided information separation method for infrared and visible image fusion is proposed. A contrastive loss guides feature extraction during encoding, which improves the complementarity of feature information at different scales of the source image, enhances information effectiveness, and completes the multi-scale feature separation. The whole fusion model is jointly trained with a combination of detail-retention loss, feature-enhancement loss, and contrastive loss to achieve good detail retention.

(3) To overcome the limitations that simple fusion strategies place on fusion performance, more targeted fusion methods are used. An interactive residual attention fusion strategy is proposed for the important high-frequency detail branch: a learnable coordinate attention module adaptively fuses critical detail information based on the corresponding feature maps. In the low- and mid-frequency branches, a simple traditional fusion strategy is used to keep fusion efficient.

Extensive experimental results validate the feasibility and effectiveness of the proposed model, which achieves superior performance to other deep learning algorithms in both feature extraction and fusion.
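The abstract does not give implementation details of the frequency-domain decomposition in contribution (1). As a minimal sketch of the general idea, the following splits an image into low-, mid-, and high-frequency branches using ideal radial band-pass masks in the FFT domain; the mask shape and the cut-off radii `r_low` and `r_high` are illustrative assumptions, not the thesis's actual filters:

```python
import numpy as np

def frequency_split(img, r_low=0.1, r_high=0.35):
    """Split a grayscale image into low/mid/high frequency bands.

    Ideal radial masks partition the (shifted) Fourier plane, so the
    three bands sum back to the original image exactly.
    """
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))  # DC component at the center

    # Normalized frequency coordinates in [-0.5, 0.5) along each axis.
    fy = np.linspace(-0.5, 0.5, h, endpoint=False)
    fx = np.linspace(-0.5, 0.5, w, endpoint=False)
    yy, xx = np.meshgrid(fy, fx, indexing="ij")
    r = np.sqrt(xx ** 2 + yy ** 2)  # radial frequency

    masks = [r <= r_low,                       # low-frequency band
             (r > r_low) & (r <= r_high),      # mid-frequency band
             r > r_high]                       # high-frequency band

    def band(mask):
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    return tuple(band(m) for m in masks)
```

Because the three masks partition the frequency plane, `low + mid + high` reconstructs the input, so no information is lost in the decomposition itself; each branch can then be fed to its own feature extraction network.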
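The exact form of the contrastive loss in contribution (2) is not given here. As a hedged sketch of how such a loss could encourage complementarity between scales, the following InfoNCE-style function treats the same frequency band from the infrared and visible encoders as a positive pair and the other bands as negatives; the function name, the dictionary layout of the features, and the temperature `tau` are all assumptions made for illustration:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def band_contrastive_loss(ir_feats, vis_feats, tau=0.5):
    """InfoNCE-style loss over frequency-band features.

    For each band, the visible feature of the *same* band is the
    positive; visible features of the *other* bands are negatives,
    pushing different-scale features apart (complementarity).
    """
    bands = list(ir_feats)
    total = 0.0
    for b in bands:
        pos = np.exp(cosine(ir_feats[b], vis_feats[b]) / tau)
        negs = sum(np.exp(cosine(ir_feats[b], vis_feats[o]) / tau)
                   for o in bands if o != b)
        total += -np.log(pos / (pos + negs))
    return total / len(bands)
```

Minimizing this term rewards alignment of same-band features across the two modalities while penalizing similarity between different bands, which matches the stated goal of separating multi-scale information; in the thesis it is combined with detail-retention and feature-enhancement losses for joint training.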
Keywords/Search Tags: Image fusion, Multi-scale decomposition, Contrastive loss, Attention mechanism, Adaptive fusion strategy