A single sensor can only capture information from one imaging modality of a scene, which imposes inherent limitations. With the continuous development of sensing technology, however, an increasing variety of sensors is available, and images of the same scene can now be acquired by multiple sensors. Image fusion technology combines images of the same scene taken by multiple sensors from multiple directions and angles, reducing redundant information while producing good visual quality and rich detail. Visible images clearly reflect the details and background of a scene; however, under certain conditions, such as low light or fog, targets are difficult to observe in visible images. Infrared images reflect the intensity difference between a target and its background and can easily capture target information, but images obtained by infrared sensors lack detail. Infrared and visible image fusion is an important technique that combines the complementary information of the two modalities, and it has been widely applied in target recognition, remote sensing, military reconnaissance, and other fields.

In this field, common traditional methods mainly include those based on multi-scale decomposition and those based on sparse representation. Sparse-representation methods are time-consuming, while choosing flexible basis functions to decompose the source images remains a challenge for multi-scale decomposition methods; moreover, the fused images obtained by these methods retain noise from the infrared image, resulting in poor visual perception. Deep-learning-based fusion methods are computationally efficient and fast, but their training process is complex and unstable, and they easily lose the brightness of infrared targets and the details of visible images. This thesis studies the above problems and proposes new solutions. The main innovations of this thesis are as
follows:

(1) To address the problems of traditional multi-scale decomposition methods, this thesis proposes a novel infrared and visible image fusion method based on dual-kernel side window filtering and detail optimization with an S-shaped curve transformation. First, a dual-kernel side window box filter (DSWBF) with an adaptive filter-kernel size is designed to extract the base and detail layers of the source images. Then, a saliency-based fusion rule is proposed for the base layers to highlight the salient regions of the infrared image. Next, a detail optimization module based on an S-shaped curve transformation and a guided filter is constructed to optimize the detail layer of the infrared image; the optimized detail layer is then merged with the detail layer of the visible image to obtain the fused detail layer. Finally, the fused image is reconstructed from the fused base and detail layers. Experimental results on two public datasets indicate that, compared with state-of-the-art methods, the proposed method achieves better fusion performance in both subjective and objective evaluations.

(2) To address the noise retained by traditional multi-scale decomposition methods and the loss of target brightness or detail information in deep learning methods, a fusion method for infrared and visible images combining traditional multi-scale decomposition with deep learning is proposed. Specifically, each source image is first decomposed into a base layer and a detail layer by a combination of a rolling guidance filter and a Gaussian filter. For the base layers, the NestFuse network is used to effectively fuse the target information of the infrared base layer with the background information of the visible base layer. For the detail layers, a side window box filter is used to optimize the detail layer of the infrared image and reduce its noise, after which it is fused with the detail layer of the visible image to obtain the fused detail layer. Finally, the fused image is reconstructed from the fused base and detail layers. Experimental results show that this method not only reduces the influence of noise on the fused image but also achieves better objective and subjective results than the NestFuse network alone.
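The shared skeleton of both methods, decomposing each source into base and detail layers, fusing the base layers with saliency-derived weights, optimizing the infrared detail layer with an S-shaped transform, and reconstructing the result, can be sketched roughly as follows. This is a minimal illustration, not the thesis implementation: a plain Gaussian filter stands in for the DSWBF and rolling-guidance filters, a `tanh`-based curve stands in for the exact S-shaped transformation, the guided-filter and NestFuse stages are omitted, and all function names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=5.0):
    """Split an image into a smooth base layer and a residual detail layer.
    A plain Gaussian stands in for the DSWBF / rolling-guidance filters."""
    base = gaussian_filter(img, sigma)
    return base, img - base

def s_curve(detail, gain=8.0):
    """Illustrative S-shaped transform: boosts mid-range details and
    compresses extremes, suppressing weak noise in the IR detail layer."""
    return np.tanh(gain * detail) / gain

def saliency_weight(base, sigma=5.0):
    """Simple saliency proxy: absolute deviation from the local mean."""
    return np.abs(base - gaussian_filter(base, sigma))

def fuse(ir, vis):
    ir_b, ir_d = decompose(ir)
    vis_b, vis_d = decompose(vis)
    # Base layers: saliency-weighted average highlights bright IR targets.
    w_ir = saliency_weight(ir_b)
    w_vis = saliency_weight(vis_b)
    w = w_ir / (w_ir + w_vis + 1e-8)
    fused_base = w * ir_b + (1.0 - w) * vis_b
    # Detail layers: optimize the IR detail, then take the stronger response.
    ir_d = s_curve(ir_d)
    fused_detail = np.where(np.abs(ir_d) > np.abs(vis_d), ir_d, vis_d)
    # Reconstruction: sum of fused base and fused detail layers.
    return np.clip(fused_base + fused_detail, 0.0, 1.0)

# Toy example on random "images" with intensities in [0, 1).
rng = np.random.default_rng(0)
ir = rng.random((64, 64))
vis = rng.random((64, 64))
out = fuse(ir, vis)
print(out.shape)
```

The point of the sketch is the two-stream structure: base-layer fusion decides which modality dominates each region, while detail-layer fusion controls how much infrared texture (and noise) survives, which is why the thesis optimizes the infrared detail layer before merging.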