
Research On Image Deblurring Methods Based On Deep Learning

Posted on: 2024-06-18  Degree: Master  Type: Thesis
Country: China  Candidate: C Yu  Full Text: PDF
GTID: 2568307079454584  Subject: Information and Communication Engineering
Abstract/Summary:
Image blur is a common form of image degradation that can occur in any application requiring imaging devices. Defocus blur, in particular, arises when an object lies outside the digital camera's focal plane. Because a camera's depth of field is limited, defocus blur is unavoidable when the scene contains significant depth variation, and different regions of a defocused image may be blurred to different degrees, which makes recovering a sharp image from a blurred one a challenging task. Dual-pixel sensors capture additional optical information with two independent photodiodes per pixel, which helps cameras perform autofocus; the disparity information hidden in the captured left and right sub-views has recently attracted attention in computer vision for depth estimation and image deblurring.

To address single defocused image restoration, this thesis proposes SDFRNet, which improves the encoding, bottleneck, and decoding layers of the U-Net [1] architecture. First, considering the spatial variation between blurred and sharp regions in defocused images, a residual spatial attention module is designed as the network's basic building block. Second, to avoid the loss of feature-map information caused by conventional pooling downsampling and deconvolution upsampling, the thesis exploits the lossless invertibility of the discrete wavelet transform, proposing downsampling based on the discrete wavelet transform and upsampling based on its inverse. Next, since the encoding layers of the U-Net [1] architecture lift the input feature map to high dimensions, multi-scale local convolution is introduced into the bottleneck layer to better exploit high-dimensional feature information. Finally, compared with the single-image input version of DPDNet [2] on the indoor scenes of the DPDD [2] test set, SDFRNet achieves clear gains: a 67% reduction in parameter count and improvements of 1.61 dB, 4.66%, 16.12%, and 13.39% in PSNR, SSIM, MAE, and LPIPS, respectively.

To address dual-pixel defocused image-pair restoration, this thesis takes the left and right sub-images captured by the dual-pixel sensor as network inputs and proposes DPDFRNet, which builds on SDFRNet with targeted improvements. Since the left and right sub-images differ in complex ways within the blurred regions, and those regions themselves vary spatially in blur strength, the residual spatial attention module is first extended from a multi-scale perspective with a nested residual structure, allowing the network to learn spatially varying features over multiple ranges from the multi-scale information in the multi-level residuals. Second, atrous (dilated) convolution is introduced in the feature transformation stage, enlarging the receptive field without losing resolution and strengthening the network's perception of larger-area features. Finally, compared with the dual-pixel image-pair input version of DPDNet [2] on the indoor scenes of the DPDD [2] test set, DPDFRNet shows a clear advantage: a 40.73% reduction in parameter count and improvements of 1.29 dB, 2.24%, and 17.24% in PSNR, SSIM, and MAE, respectively.
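The abstract describes the residual spatial attention module only at a high level. The following is a minimal sketch of one plausible realisation, assuming a PyTorch implementation and a CBAM-style spatial gate; the framework, layer widths, and gating design are illustrative assumptions, not the thesis's actual module.

```python
import torch
import torch.nn as nn


class ResidualSpatialAttention(nn.Module):
    """Residual block whose output is reweighted by a per-pixel attention map,
    so blurred and sharp regions can be treated differently (illustrative sketch)."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Spatial attention: pool across channels, then a conv + sigmoid gate.
        self.gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.body(x)
        pooled = torch.cat(
            [feat.mean(dim=1, keepdim=True), feat.amax(dim=1, keepdim=True)], dim=1
        )
        attn = self.gate(pooled)   # (B, 1, H, W) per-pixel weights
        return x + feat * attn     # residual connection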
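The wavelet-based resampling replaces pooling and deconvolution with an invertible transform. The sketch below illustrates the idea with a Haar wavelet: downsampling packs each channel into four half-resolution subbands (channels quadruple, spatial size halves), and upsampling inverts it exactly, so no feature information is lost. The choice of the Haar basis and the PyTorch interface are assumptions made for illustration; the thesis does not specify its implementation.

```python
import torch
import torch.nn as nn


class HaarDownsample(nn.Module):
    """Lossless downsampling: split each channel into 4 Haar subbands (LL, LH, HL, HH)."""

    def forward(self, x):
        # Even/odd rows and columns of the feature map.
        x00 = x[:, :, 0::2, 0::2]
        x01 = x[:, :, 0::2, 1::2]
        x10 = x[:, :, 1::2, 0::2]
        x11 = x[:, :, 1::2, 1::2]
        ll = (x00 + x01 + x10 + x11) / 2
        lh = (-x00 - x01 + x10 + x11) / 2
        hl = (-x00 + x01 - x10 + x11) / 2
        hh = (x00 - x01 - x10 + x11) / 2
        return torch.cat([ll, lh, hl, hh], dim=1)


class HaarUpsample(nn.Module):
    """Exact inverse of HaarDownsample (inverse discrete wavelet transform)."""

    def forward(self, x):
        c = x.shape[1] // 4
        ll, lh, hl, hh = x[:, :c], x[:, c:2 * c], x[:, 2 * c:3 * c], x[:, 3 * c:]
        b, _, h, w = ll.shape
        out = x.new_zeros(b, c, 2 * h, 2 * w)
        out[:, :, 0::2, 0::2] = (ll - lh - hl + hh) / 2
        out[:, :, 0::2, 1::2] = (ll - lh + hl - hh) / 2
        out[:, :, 1::2, 0::2] = (ll + lh - hl - hh) / 2
        out[:, :, 1::2, 1::2] = (ll + lh + hl + hh) / 2
        return out
```

Because the four subband filters form an orthogonal basis, HaarUpsample(HaarDownsample(x)) reproduces x exactly, which is the lossless property the abstract relies on.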
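Atrous (dilated) convolution enlarges the receptive field without downsampling: a 3x3 kernel with dilation d spans a (2d+1)x(2d+1) window at the same parameter cost and preserves resolution. The short example below is hypothetical; the channel width and dilation rates are not taken from DPDFRNet.

```python
import torch.nn as nn

# Two stacked dilated 3x3 convolutions (dilations 2 and 4) keep the feature map
# at full resolution while widening the effective receptive field.
dilated_block = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, padding=4, dilation=4),
    nn.ReLU(inplace=True),
)
```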
Keywords/Search Tags:Deep Learning, Image Deblurring, Attention Mechanism, Residual Network, Multi-scale Information, Dilated Convolution