With the gradual maturity of information technology, information fusion has become widely used because of its ability to combine information, highlight useful features, and reduce redundancy. After information fusion theory was introduced into image processing, image fusion gradually emerged as a research direction in its own right. Image fusion is a branch of multi-sensor data fusion technology: it is the process of extracting various features from two or more images and fusing them into a single image. The fusion of infrared and visible light images is one of the main hotspots in this field. An infrared image distinguishes targets from the background according to thermal radiation, so it can reflect the differences between target objects under all-weather conditions, but it cannot faithfully reflect the real scene because it is affected by background temperature differences. A visible light image offers high resolution and clear texture detail under adequate illumination, matching the human visual system. However, when the illumination is low, visible light images suffer from poor contrast, limited gray levels, a poor instantaneous dynamic range, and flicker at high gain. By exploiting the spatial and temporal correlation of the two source images and the complementarity of their scene descriptions, the fused image represents the scene in a more detailed and comprehensive way, which is more conducive to recognition by the human eye and to automatic detection by machines.

Aiming at the existing problems of infrared and visible light image fusion algorithms, two improved fusion algorithms are proposed by comparing multi-scale methods with deep learning methods. The experimental results are then evaluated both subjectively and objectively, with the goal of retaining more source-image information in the fused image and reducing distortion. The two algorithms are as follows.

1) Infrared and visible light image fusion method based on the rolling guidance filter (RGF) and pulse-coupled neural network (PCNN). The background information of source images acquired by infrared and visible light sensors is highly complex, which leads to complex target edge contours, large spatial correction errors, and poor fusion results. An improved multi-scale fusion method combining the rolling guidance filter and the pulse-coupled neural network is therefore proposed. First, the source image is decomposed by rolling iterations of guided filtering. Then, combined with a Gaussian filter, the small-scale structures of each decomposed sub-band are smoothed and the edges of large-scale structures are restored. For the low-pass image, the Kirsch operator is used for edge detection to generate a saliency map, and the low-pass image and the saliency map are fused under the guidance of a right-shifted normal-distribution-like weight to enhance the clarity of structural edges. For the high-pass images, the pulse-coupled neural network makes fusion decisions through its firing (ignition) matrix, and finally the processed high-pass and low-pass images are reconstructed to obtain the fused image. Subjective and objective evaluation of the fusion results shows that the algorithm improves the depiction of overall detail and texture, improves scene recognition to a certain extent, and better matches human visual characteristics in terms of subjective observation.
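To make the decomposition and saliency steps of this method concrete, the following is a minimal Python sketch of a rolling-guidance-filter decomposition and a Kirsch-based saliency map, assuming NumPy and OpenCV with the contrib ximgproc module. The filter parameters, iteration count, file names, and helper functions are illustrative assumptions, and the standard 8-direction Kirsch operator is shown rather than the exact variant used in the thesis.

```python
import cv2
import numpy as np

# Standard 8-direction Kirsch kernels (shown for illustration; the thesis
# builds on this idea with its own edge-detection variant).
KIRSCH_KERNELS = [
    np.array([[ 5,  5,  5], [-3,  0, -3], [-3, -3, -3]], dtype=np.float32),
    np.array([[ 5,  5, -3], [ 5,  0, -3], [-3, -3, -3]], dtype=np.float32),
    np.array([[ 5, -3, -3], [ 5,  0, -3], [ 5, -3, -3]], dtype=np.float32),
    np.array([[-3, -3, -3], [ 5,  0, -3], [ 5,  5, -3]], dtype=np.float32),
    np.array([[-3, -3, -3], [-3,  0, -3], [ 5,  5,  5]], dtype=np.float32),
    np.array([[-3, -3, -3], [-3,  0,  5], [-3,  5,  5]], dtype=np.float32),
    np.array([[-3, -3,  5], [-3,  0,  5], [-3, -3,  5]], dtype=np.float32),
    np.array([[-3,  5,  5], [-3,  0,  5], [-3, -3, -3]], dtype=np.float32),
]

def rolling_guidance_filter(img, sigma_s=3.0, sigma_r=0.05, iterations=4):
    """Remove small-scale structures, then iteratively recover large-scale edges."""
    # Small-structure removal: plain Gaussian smoothing.
    out = cv2.GaussianBlur(img, (0, 0), sigma_s)
    # Edge recovery: joint (guided) filtering of the input, guided by the
    # previous output, repeated for a few rolling iterations.
    for _ in range(iterations):
        out = cv2.ximgproc.guidedFilter(out, img, radius=int(3 * sigma_s), eps=sigma_r ** 2)
    return out

def rgf_decompose(img):
    """Split a float32 image in [0, 1] into a low-pass base and a high-pass detail layer."""
    base = rolling_guidance_filter(img)
    detail = img - base
    return base, detail

def kirsch_saliency(lowpass):
    """Edge saliency of the low-pass image: maximum response over the Kirsch directions."""
    responses = [np.abs(cv2.filter2D(lowpass, cv2.CV_32F, k)) for k in KIRSCH_KERNELS]
    saliency = np.max(np.stack(responses), axis=0)
    return saliency / (saliency.max() + 1e-12)  # normalize to [0, 1]

if __name__ == "__main__":
    # Hypothetical input files; any registered infrared/visible pair would do.
    ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    base_ir, detail_ir = rgf_decompose(ir)
    base_vis, detail_vis = rgf_decompose(vis)
    sal_ir, sal_vis = kirsch_saliency(base_ir), kirsch_saliency(base_vis)
```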
2) Hybrid multi-scale infrared and visible image fusion method. A hybrid multi-scale infrared and visible image fusion algorithm is proposed to address the easy loss of detail information, low contrast, and poor adaptability to complex environments that arise in multi-scale decomposition. First, an improved rolling guidance filter combined with a Gaussian filter is used to optimize the high-pass and low-pass images at their respective scales. Then, an improved Kirsch operator is used to detect edges in 12 directions and generate the saliency map of the low-pass image. An NL-ResNet network is designed to construct the weight map and guide the weighted fusion of the low-pass image and the saliency image, enhancing the clarity of structural edges. The high-pass image is fused with an improved pulse-coupled neural network. Finally, the processed high-pass and low-pass images are reconstructed to obtain the fusion result. The experimental results show that, in terms of visual effect, the algorithm is more consistent with the human visual system's observation of the actual scene; it improves scene identification and environmental interpretation on the whole, and further deepens the observer's understanding of the scene.
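In both methods the high-pass fusion decision is driven by the firing (ignition) matrix of a pulse-coupled neural network. The sketch below shows a simplified PCNN of the kind commonly used for fusion, again in Python with NumPy and SciPy; the time constants, linking strength, and the choose-the-larger-firing-count rule are generic textbook choices rather than the exact improved model described in the thesis.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_counts(stimulus, iterations=200, alpha_f=0.1, alpha_l=1.0,
                       alpha_e=1.0, v_f=0.5, v_l=0.2, v_e=20.0, beta=0.1):
    """Run a simplified PCNN on a normalized stimulus map and return the
    accumulated firing counts (ignition matrix) per pixel."""
    # 3x3 linking weights, roughly inverse to the distance from the centre neuron.
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    f = np.zeros_like(stimulus)   # feeding input
    l = np.zeros_like(stimulus)   # linking input
    e = np.ones_like(stimulus)    # dynamic threshold
    y = np.zeros_like(stimulus)   # firing output
    counts = np.zeros_like(stimulus)
    for _ in range(iterations):
        neighbourhood = convolve(y, w, mode="constant")
        f = np.exp(-alpha_f) * f + v_f * neighbourhood + stimulus
        l = np.exp(-alpha_l) * l + v_l * neighbourhood
        u = f * (1.0 + beta * l)             # internal activity
        y = (u > e).astype(stimulus.dtype)   # fire when activity exceeds threshold
        e = np.exp(-alpha_e) * e + v_e * y   # raise the threshold where neurons fired
        counts += y
    return counts

def fuse_highpass(detail_ir, detail_vis):
    """Per pixel, keep the high-pass coefficient whose PCNN fires more often."""
    counts_ir = pcnn_firing_counts(np.abs(detail_ir))
    counts_vis = pcnn_firing_counts(np.abs(detail_vis))
    return np.where(counts_ir >= counts_vis, detail_ir, detail_vis)
```

Under these assumptions, the final image would be reconstructed by adding the PCNN-fused high-pass layer back to the fused low-pass layer, mirroring the reconstruction step described above.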