Low-light image enhancement has long been an important research branch of computer vision. Images captured under low-light and backlit conditions exhibit low brightness and low contrast, and are accompanied by varying degrees of degradation such as noise and color imbalance. Naively boosting image contrast fully exposes the hidden noise and color distortion, harming both subjective visual perception and the performance of downstream applications. In recent years, many researchers have studied low-light image enhancement from different perspectives, but the enhanced images often still suffer from uneven exposure, strong noise, color imbalance, and loss of detail. To address these problems, this paper studies low-light image enhancement using deep learning methods. The work is divided into the following parts.

(1) A progressive dual-branch network is proposed for low-light image enhancement. An assisted recovery module is designed to exploit the hybrid correlation and feature complementarity between the inverted image and the low-light image, and feature information at different scales is extracted progressively by cascading multiple assisted recovery modules. To balance execution efficiency against parameter count, depthwise separable convolutions and an asymmetric assisted recovery module are used to improve the computational efficiency of the model. To reduce the degradation caused by contrast enhancement, a large kernel attention block is introduced so that the network can emphasize hidden low-light regions, effectively suppressing noise and correcting color imbalance. To fuse the feature information of the inverted image and the low-light image effectively, an attention fusion block is designed; it captures global feature information and re-encodes the semantic dependencies between channels. Finally, a fusion reconstruction module is designed to further refine the features and improve information flow within the network. Extensive qualitative and quantitative experiments on publicly available low-light image datasets show that our method achieves better visual quality and metric evaluation scores than other state-of-the-art low-light image enhancement methods.
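To make the dual-branch idea concrete, the following is a minimal PyTorch sketch of the building blocks named above. The class names, channel sizes, and fusion scheme are illustrative assumptions, not the exact implementation; the large kernel attention follows the well-known decomposition of a depthwise convolution, a dilated depthwise convolution, and a pointwise convolution.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv followed by a pointwise conv; far fewer
    parameters than a standard convolution of the same kernel size."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class LargeKernelAttention(nn.Module):
    """Large-kernel attention: a 5x5 depthwise conv, a 7x7 dilated
    depthwise conv (dilation 3, effective 19x19 field), and a 1x1 conv
    produce an attention map that gates the input feature."""
    def __init__(self, ch):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, 5, padding=2, groups=ch)
        self.dw_dilated = nn.Conv2d(ch, ch, 7, padding=9, dilation=3, groups=ch)
        self.pw = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn  # emphasize hidden low-light regions


class AssistedRecoveryBlock(nn.Module):
    """Hypothetical dual-branch block: fuses features of the low-light
    image and its inverted counterpart, whose bright regions hint at
    structure hidden in the dark regions."""
    def __init__(self, ch):
        super().__init__()
        self.low_branch = DepthwiseSeparableConv(ch, ch)
        self.inv_branch = DepthwiseSeparableConv(ch, ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.attn = LargeKernelAttention(ch)

    def forward(self, low_feat, inv_feat):
        fused = self.fuse(torch.cat([self.low_branch(low_feat),
                                     self.inv_branch(inv_feat)], dim=1))
        return self.attn(fused)
```

With inputs normalized to [0, 1], the inverted image is simply 1 - x, and cascading several such blocks at different scales yields the progressive multi-scale extraction described above.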
(2) Combining the local spatial perception of convolutional neural networks with the global spatial perception of the Transformer, a two-stage perceptual enhancement Transformer network is proposed for low-light image enhancement. The method comprises two stages: a feature extraction stage and a detail fusion stage. First, in the feature extraction stage, a Transformer-based encoder extracts global features and enlarges the receptive field. Since the Transformer lacks the ability to capture local features, a perceptual enhancement module is introduced to improve the interaction between local and global feature information. Second, between the corresponding encoder and decoder blocks of each layer, a feature fusion block is introduced to compensate feature information at different scales, improving feature reusability and network stability. In addition, a self-calibration module between the two stages redistributes local feature information and strengthens the network's supervision capability. In the detail fusion stage, a detail enhancement unit is designed to recover a high-resolution enhanced image while further preserving its texture details. Qualitative comparison and quantitative analysis show that the method outperforms other low-light image enhancement methods in both subjective visual quality and objective metric values.
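The interplay of local and global perception in part (2) can likewise be sketched. The block below pairs standard multi-head self-attention (global context) with a depthwise convolutional branch (local texture); the class name, the fusion-by-addition scheme, and the default sizes are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn


class PerceptualEnhancementBlock(nn.Module):
    """Illustrative block combining Transformer self-attention (global
    receptive field) with a convolutional branch (local features)."""
    def __init__(self, ch=64, num_heads=4):  # ch must divide by num_heads
        super().__init__()
        self.norm = nn.LayerNorm(ch)
        self.attn = nn.MultiheadAttention(ch, num_heads, batch_first=True)
        self.local = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch),  # depthwise: local detail
            nn.Conv2d(ch, ch, 1),                        # pointwise: channel mixing
        )

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        # Residual fusion of the global and local views of the feature map.
        return x + global_feat + self.local(x)
```

Stacking such blocks in the encoder gives the feature extraction stage both the enlarged receptive field of self-attention and the local detail sensitivity of convolution.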