Low-light image enhancement is a crucial research direction in computer vision. Owing to short exposure times and low illumination, low-light images often suffer from low contrast, dull colors, and significant noise. These issues not only degrade image quality and human visual perception but also pose challenges for downstream computer vision tasks. This paper focuses on deep-learning-based low-light image enhancement models. Through an in-depth study of enhancement methods based on supervised learning, zero-reference learning, and normalizing flow models, it aims to improve the quality of low-light images by increasing brightness, restoring color, and reducing noise.

To address color distortion and uneven enhancement in current convolutional-neural-network-based low-light image enhancement methods, this paper proposes a new supervised learning model named LLU-Swin. The proposed method uses a Transformer module with residual connections, called RRTM, as its encoding/decoding module. RRTM contains several improved Swin Transformer blocks, named DLTB, together with a convolutional layer; the residual connections guide the network while the convolutional layer aggregates the Transformer features. In addition, LLU-Swin uses D-LeFF, a locally connected feed-forward network with dilated convolutions, to enlarge the receptive field without changing the feature-map size. The combination of these two modules exploits the complementary strengths of convolutional networks and Transformers, capturing long-range dependencies while enhancing local features. Quantitative and qualitative experiments on the LOL test set demonstrate that LLU-Swin performs well in low-light image enhancement, reaching a PSNR of 19.48 and an SSIM of 0.86, and providing a useful exploration of Transformer networks in this field.

To address the limited generalization of supervised methods on real unlabeled datasets, as well as the fact that most enhancement methods do not account for noise, this paper proposes LightenConv, a low-light image enhancement model based on the Retinex theory. LightenConv is a zero-reference learning model that uses a ConvNeXt V2 backbone to decompose the input image into reflectance and illumination components, and applies gradient decomposition in the horizontal and vertical directions to obtain the noise component. Through a composite loss function, the model can brighten the input image while reducing its noise, without requiring paired datasets. Experimental results show that on the MEF and NPE datasets, LightenConv achieves NIQMC scores of 4.7223 and 5.3223 and CPCQI scores of 0.9991 and 0.9485, demonstrating high quality and robustness in image enhancement.

To address the lack of naturalness of pixel-level-loss-based, data-driven low-light enhancement methods in real-world scenarios, this paper proposes LightenFlow, a low-light enhancement model based on normalizing flows and a fused attention mechanism. The model computes the Jacobian determinant and inverse transformation of features at different levels to model the distribution of normally lit images, thereby increasing the naturalness of the enhanced image while brightening the input. Experimental results demonstrate that the proposed method can effectively process low-light images in complex environments: on a synthetic low-light dataset, the image quality metrics PSNR, SSIM, and LPIPS reach 39.16, 0.986, and 0.016, respectively; on a real-world complex low-light dataset, NIQE and CPCQI reach 4.0222 and 1.1568, and the generated images exhibit higher naturalness. These results suggest that the proposed method has broad potential applications.
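The property D-LeFF relies on, enlarging the receptive field without changing the feature-map size, can be illustrated with a minimal "same"-padded dilated convolution. The sketch below is a plain numpy illustration under assumed settings (single channel, zero padding, 3x3 averaging kernel), not the thesis implementation:

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """'Same'-padded 2-D convolution with a dilated kernel.

    The effective kernel span grows to k + (k-1)*(dilation-1), so the
    receptive field expands while the output keeps the input's spatial
    size -- the behavior the abstract attributes to D-LeFF.
    """
    k = kernel.shape[0]
    span = k + (k - 1) * (dilation - 1)   # effective kernel extent
    pad = span // 2
    xp = np.pad(x, pad)                   # zero padding keeps output size fixed
    h, w = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            for a in range(k):
                for b in range(k):
                    out[i, j] += kernel[a, b] * xp[i + a * dilation,
                                                   j + b * dilation]
    return out

x = np.random.rand(8, 8)
kern = np.ones((3, 3)) / 9.0
y1 = dilated_conv2d(x, kern, dilation=1)  # 3x3 receptive field
y2 = dilated_conv2d(x, kern, dilation=2)  # 5x5 receptive field, same output size
print(y1.shape, y2.shape)                 # (8, 8) (8, 8)
```

With dilation 2, the same nine kernel taps cover a 5x5 window, yet both outputs keep the 8x8 input size, which is why dilation is an attractive way to widen a feed-forward block's spatial context.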
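The Retinex decomposition underlying LightenConv models an image as the product of reflectance and illumination, I = R * L. The following is a deliberately simplified sketch of that idea (channel-maximum illumination estimate, gamma-lifted recomposition); LightenConv itself learns the decomposition with a ConvNeXt V2 network and also extracts a noise component, which this toy version omits:

```python
import numpy as np

def retinex_enhance(img, gamma=0.45, eps=1e-3):
    """Minimal Retinex-style enhancement sketch (not LightenConv itself).

    Decompose I = R * L: take the per-pixel channel maximum as a crude
    illumination estimate L, recover reflectance R = I / L, then brighten
    by applying a gamma curve to L and recomposing the image.
    """
    L = img.max(axis=2, keepdims=True)    # illumination estimate, HxWx1
    R = img / np.maximum(L, eps)          # reflectance, roughly in [0, 1]
    L_adj = np.power(L, gamma)            # gamma-lift the illumination only
    return np.clip(R * L_adj, 0.0, 1.0)

dark = np.full((4, 4, 3), 0.1)            # uniformly dark RGB patch
bright = retinex_enhance(dark)
print(float(dark.mean()), float(bright.mean()))
```

Adjusting only the illumination component is the key design choice: reflectance carries the scene's intrinsic colors, so brightening L while leaving R untouched raises exposure without washing out color, which is the motivation for Retinex-based enhancers in general.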
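The two ingredients the abstract names for LightenFlow, a Jacobian determinant for the change-of-variables likelihood and an exact inverse transformation, can be seen in a one-layer toy flow. The affine layer below is an illustrative stand-in, not the thesis architecture, which applies these operations to multi-level image features:

```python
import numpy as np

class AffineFlow:
    """Toy invertible affine layer: z = exp(log_s) * x + t.

    Shows the two operations a normalizing flow needs: the log of the
    absolute Jacobian determinant (for evaluating the likelihood of
    normally lit images under the change-of-variables formula) and an
    exact inverse (for mapping latent codes back to image space).
    """
    def __init__(self, log_s, t):
        self.log_s, self.t = log_s, t

    def forward(self, x):
        z = np.exp(self.log_s) * x + self.t
        # Diagonal Jacobian => log|det J| is just the sum of log-scales,
        # identical for every sample in the batch.
        log_det = np.full(x.shape[0], np.sum(self.log_s))
        return z, log_det

    def inverse(self, z):
        return (z - self.t) * np.exp(-self.log_s)

flow = AffineFlow(log_s=np.array([0.5, -0.2]), t=np.array([1.0, 0.0]))
x = np.random.randn(4, 2)
z, log_det = flow.forward(x)
x_back = flow.inverse(z)
print(np.allclose(x, x_back))  # True: the transformation is exactly invertible
```

Stacking such invertible layers lets a flow model maximize the exact likelihood of well-lit reference images and then invert the map, which is the mechanism the abstract credits for the improved naturalness of LightenFlow's outputs.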