Infrared and low-light-level visible image fusion is an enhancement technique that fuses two types of image information, captured by different sensors of the same scene, into a single image with richer scene content. Because low-light-level visible images and infrared images have distinctly different imaging characteristics, they complement each other well. Visible images carry detailed texture information but are constrained by factors such as illumination, environment, and occlusion; infrared images are formed by thermal radiation and lack clear texture detail, but they capture salient thermal-radiation targets and are unaffected by illumination conditions. Fusing the two can therefore produce an image with both salient target information and clear background detail. To address the low contrast, loss of detail, and poor visibility of traditional fusion algorithms, this paper proposes two methods for fusing infrared and low-light-level visible images. The main research contents and innovations of the dissertation are as follows:

(1) Because low-light-level visible images suffer from low contrast and poor visibility under poor lighting conditions, a new enhancement-based fusion method for infrared and low-light-level visible images, built on latent low-rank representation and composite filtering, is proposed to improve the fusion result in such cases. First, the visible image is enhanced with an improved high-dynamic-range compression method to raise its luminance. Then the infrared image and the enhanced low-light-level visible image are each decomposed, using a decomposition method based on latent low-rank representation and composite filtering, into corresponding low-frequency and high-frequency layers. Finally, the fused image is obtained by merging the low-frequency layers with an improved contrast-enhanced visual-saliency-map fusion rule and the high-frequency layers with an improved weighted-least-squares optimization fusion rule. Experimental results show that the fused images are rich in detail, sharp, and highly visible.

(2) To fuse the nighttime visibility information of infrared images with the texture detail and environmental information of low-light-level visible images into an information-rich result, while preserving the characteristic features of both sensors and preventing excessive information loss, an end-to-end deep learning network framework for infrared and low-light-level visible image fusion is proposed in this paper. First, to address the scarcity of infrared and low-light-level visible datasets, existing data are sorted, calibrated, and sliced to build a sufficiently large and valid dataset. Next, an encoder-decoder fusion model is designed based on the U-net architecture: features of the infrared and low-light-level visible images are extracted by separate encoding blocks, and the features at each level are then fused with each other and progressively convolved. Finally, a new hybrid loss function is designed, and the fused feature maps are reconstructed by a decoder. Experimental results show that the model effectively extracts and preserves the feature information of the different source images; the fused images retain the detailed background information of the visible image while highlighting the nighttime-visibility information of the infrared image.
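The decompose-then-fuse pipeline of method (1) can be illustrated with a minimal sketch. This is not the dissertation's algorithm: a plain box filter stands in for latent low-rank representation and composite filtering, a local-deviation map stands in for the visual-saliency-map rule, and a max-absolute rule stands in for the weighted-least-squares optimization; all function names here are hypothetical.

```python
import numpy as np

def box_blur(img, r=3):
    # Simple mean filter as a stand-in low-pass decomposition
    # (the paper uses latent low-rank representation + composite filtering).
    k = 2 * r + 1
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fuse(ir, vis):
    # Two-scale decomposition into low-frequency (base) and
    # high-frequency (detail) layers.
    ir_base, vis_base = box_blur(ir), box_blur(vis)
    ir_det, vis_det = ir - ir_base, vis - vis_base
    # Base layers: weight by a crude saliency proxy (deviation from the
    # global mean), standing in for the visual-saliency-map fusion rule.
    s_ir = np.abs(ir - ir.mean())
    s_vis = np.abs(vis - vis.mean())
    w = s_ir / (s_ir + s_vis + 1e-9)
    base = w * ir_base + (1 - w) * vis_base
    # Detail layers: keep the larger-magnitude detail per pixel, a simple
    # substitute for the weighted-least-squares optimization rule.
    det = np.where(np.abs(ir_det) >= np.abs(vis_det), ir_det, vis_det)
    return base + det
```

The sketch only conveys the shape of the pipeline: each source is split into complementary frequency layers, each layer pair is fused by its own rule, and the layers are summed to reconstruct the result.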
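A hybrid loss of the kind mentioned in method (2) typically combines several complementary terms. The dissertation's actual loss design is not specified here, so the following is an assumed illustrative form with an intensity term and a gradient term; the weighting `alpha` and both terms are hypothetical.

```python
import numpy as np

def grad_mag(x):
    # Forward-difference gradient magnitude, cropped to a common size.
    gx = np.diff(x, axis=1)[:-1, :]
    gy = np.diff(x, axis=0)[:, :-1]
    return np.hypot(gx, gy)

def hybrid_loss(fused, ir, vis, alpha=0.5):
    # Intensity term: pull the fused image toward the per-pixel maximum,
    # which tends to keep bright infrared targets.
    l_int = np.mean((fused - np.maximum(ir, vis)) ** 2)
    # Gradient term: match the stronger texture of the two sources,
    # which tends to keep visible-light detail.
    l_grad = np.mean((grad_mag(fused)
                      - np.maximum(grad_mag(ir), grad_mag(vis))) ** 2)
    # Hybrid objective: weighted sum of the two terms.
    return alpha * l_int + (1 - alpha) * l_grad
```

In an end-to-end framework such as the one described, a loss of this shape is minimized over the network's output so that the decoder's reconstruction retains both infrared saliency and visible texture.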