To obtain complementary information from the same scene, infrared and visible image fusion can be used to produce a single image with more comprehensive scene and target information. Infrared and visible image fusion techniques have been widely and intensively studied in power equipment monitoring, resource exploration, biomedicine, face recognition and space exploration. Edge-preserving filters in multi-scale transforms offer edge-preserving smoothing, edge recovery and low computational complexity, while deep learning provides powerful feature extraction capability. To address the problems of unclear edge details and low brightness and contrast in infrared and visible image fusion, this paper proposes the following two fusion algorithms that combine multi-scale transforms with deep learning networks:

(1) An infrared and visible image fusion method based on guided filtering and the VGGNet19 network is proposed to address low contrast and edge artefacts in fused images. Guided filtering is an important multi-scale transform method with edge-preserving smoothing and low complexity, while the VGGNet19 network improves performance by deepening the network structure and extracting image features at deeper levels. Combining deep learning with multi-scale transform methods better extracts the structural information of the source images and effectively retains the texture details of the scene, improving fusion quality. First, each source image is decomposed by a guided filter into a base layer containing its large-scale information and detail layers containing texture and other fine information. Then, the base layers are fused using Laplacian energy to obtain the base fusion map, and the detail layers are fused through the VGGNet19 network by multi-layer feature extraction, L1 regularisation, up-sampling and a final weighted-average strategy to obtain the detail fusion map. Finally, the two maps are summed to obtain the final result. Experimental results show that this method resolves the low contrast and edge artefacts of existing fusion algorithms and gives the fused image a more realistic visual effect.

(2) An infrared and visible image fusion method based on fast alternating guided filtering and a convolutional neural network (CNN) is proposed to address the partial loss of edge and detail information in some fusion algorithms. A fast alternating guided filter is proposed to improve computational efficiency while preserving fusion quality, and is combined with a CNN and effective infrared feature extraction. First, quadtree decomposition and Bessel interpolation are used to extract the infrared brightness features of the source images, and an initial fusion image is obtained by combining them with the visible image. Second, base-layer and detail-layer information of the source images is obtained through fast alternating guided filtering; the base layers are fused through the CNN and the Laplacian transform, and the detail layers are fused using a saliency measurement method. Finally, the initial fusion map, the base fusion map and the detail fusion map are added to obtain the final fusion result. Experimental results show that this method retains more of the main energy information of the source images, extracts more detail and texture information, and produces clear edges.

This paper also designs an infrared and visible image fusion system based on multi-scale transformation and deep learning. The subjective and objective indicators of the proposed algorithms are compared and analysed against seven comparative algorithms, intuitively demonstrating the advantages of our algorithms and showing the application performance of the infrared and visible image fusion system.
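The decompose-fuse-sum pipeline shared by both methods can be sketched as follows. This is a minimal illustration, not the thesis implementation: the guided filter uses He et al.'s standard box-filter formulation, and the Laplacian-energy base rule and the VGGNet19 detail weighting are replaced here by simple average and max-absolute stand-ins; the radius `r` and regularisation `eps` are assumed example values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Standard box-filter guided filter: I is the guide, p the input."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_I = uniform_filter(I * I, size)
    corr_Ip = uniform_filter(I * p, size)
    var_I = corr_I - mean_I * mean_I        # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p      # local covariance of guide and input
    a = cov_Ip / (var_I + eps)              # linear coefficients per window
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def decompose(img, r=8, eps=0.01):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = guided_filter(img, img, r, eps)  # self-guided: edge-preserving smoothing
    return base, img - base

def fuse(ir, vis, r=8, eps=0.01):
    """Two-scale fusion: fuse base and detail layers separately, then sum."""
    base_ir, detail_ir = decompose(ir, r, eps)
    base_vis, detail_vis = decompose(vis, r, eps)
    # Stand-in rules: averaging replaces the Laplacian-energy base rule,
    # max-absolute selection replaces the VGGNet19-derived detail weights.
    fused_base = 0.5 * (base_ir + base_vis)
    fused_detail = np.where(np.abs(detail_ir) >= np.abs(detail_vis),
                            detail_ir, detail_vis)
    return fused_base + fused_detail

ir = np.random.rand(64, 64)
vis = np.random.rand(64, 64)
fused = fuse(ir, vis)
print(fused.shape)  # (64, 64)
```

Because the detail layer is defined as the residual of the base layer, each source image is exactly reconstructed by `base + detail`, so the summation in the final step loses no information from the two fused layers.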