Traditional deblurring methods focus on estimating the blur kernel, and existing restoration techniques explicitly rely on a large number of natural-image priors that are hand-crafted from empirical observation to constrain the solution space. Designing such priors is challenging, however, and the resulting models often generalize poorly. Consequently, existing methods still face many obstacles before they can be applied to general deblurring tasks: learning the texture details in images is difficult, and in much previous work the restored images suffer from insufficient detail and indistinct edges, while restoration is also time-consuming. To address these problems, a novel multi-scale restoration method for image motion deblurring based on a generative adversarial network (GAN) is proposed, combining deep learning with adversarial learning. The method uses a multi-scale cascaded network structure to directly learn the mapping from blurred, degraded images to sharp images, omitting the blur-kernel estimation step. It improves the residual block structure by adding a parallel dilated (atrous) convolution module that fuses feature information from receptive fields at multiple scales, making the extracted features richer. A channel attention module is also added, which explicitly models interdependencies between channels to strengthen informative feature weights and suppress uninformative ones. The proposed algorithm is comprehensively evaluated on several datasets in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and restoration time. The overall results indicate that the proposed method effectively eliminates blurring and can, to a certain extent, restore images degraded by blur kernels of different sizes. The experimental results show that the PSNR of the deblurred images generated by this method is improved by at least 3.8% compared with other methods, and the restored images have clearer edges. It is also found that when the restored images are applied to a YOLO-V4 object detection task, the results improve significantly in both the categories identified and the confidence scores.

Secondly, although the multi-scale encoder-decoder network propagates contextual semantic information well, it falls short in recovering accurate spatial detail. To address this issue, a simplified multi-scale network structure is proposed: the channel attention is complemented by a spatial attention module, and a global attention module is added to fuse deep and shallow features, so that the deep features guide the shallow ones and the information flow of the whole stage is stabilized. The experimental results show that the improved model raises PSNR and SSIM by 1.5% and 1.1% respectively over the multi-scale encoder-decoder structure, while the number of generator parameters decreases by 45.9%. The results further show that the coarse-to-fine restoration strategy and the residual network structure adopted in this work effectively improve restoration speed, and the recovered images have clearer edges and richer detail. The spatial attention and global attention modules, which complement the encoder-decoder network, enable the network to strike a good balance between propagating high-level contextual semantic information and recovering accurate spatial detail.
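The parallel dilated-convolution module described above can be sketched in NumPy as follows. This is a minimal illustration, not the paper's exact design: the dilation rates (1, 2, 4), the 3x3 kernels, and the plain summation used for branch fusion (in place of a learned 1x1 fusion convolution) are all assumptions made for clarity.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """3x3 dilated (atrous) convolution with zero padding and stride 1.
    The effective receptive field grows with the dilation rate while
    the number of weights stays fixed at 9."""
    k = kernel.shape[0]
    pad = dilation * (k // 2)                 # "same" padding for odd k
    xp = np.pad(x, pad)
    h, w = x.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * xp[i * dilation:i * dilation + h,
                                     j * dilation:j * dilation + w]
    return out

def parallel_dilated_block(x, kernels, dilations=(1, 2, 4)):
    """Run parallel dilated branches over the same input, sum the
    multi-scale responses, and add a residual skip connection."""
    fused = sum(dilated_conv2d(x, k, d) for k, d in zip(kernels, dilations))
    return x + fused

# Sanity check: identity kernels reduce each branch to a pass-through,
# so the block computes x + 3x.
x = np.arange(16, dtype=np.float64).reshape(4, 4)
ident = np.zeros((3, 3)); ident[1, 1] = 1.0
out = parallel_dilated_block(x, [ident, ident, ident])
```

In a real network each branch would carry learned per-channel kernels and the fusion would itself be learned; the sketch only shows how dilation widens the receptive field without extra parameters.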
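The channel attention described above can be illustrated with a squeeze-and-excitation-style sketch. The bottleneck layout (global average pool, two fully connected layers with a ReLU between them, sigmoid gating) is a standard formulation assumed here for illustration; the paper's exact module layout may differ, and the weight matrices `w1`, `w2` are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map:
    squeeze spatial dims to one descriptor per channel, excite through
    a bottleneck MLP, then rescale each channel by its sigmoid gate.
    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights."""
    squeeze = x.mean(axis=(1, 2))                     # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)            # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # per-channel weights in (0, 1)
    return x * gates[:, None, None]

# With zero weights every gate is sigmoid(0) = 0.5, so the output is
# exactly half the input -- a convenient closed-form check.
x = np.ones((2, 3, 3))
out = channel_attention(x, np.zeros((1, 2)), np.zeros((2, 1)))
```

The gating vector is what lets the network "strengthen effective feature weights and suppress invalid features": channels with useful responses receive gates near 1, uninformative ones near 0.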
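The spatial attention module added in the second stage can be sketched similarly. CBAM-style spatial attention normally pools across the channel axis and passes the pooled maps through a learned convolution; here the learned convolution is replaced by a fixed average of the two pooled maps purely for illustration, which is an assumption and not the paper's module.

```python
import numpy as np

def spatial_attention(x):
    """Spatial attention on a (C, H, W) feature map: average-pool and
    max-pool across channels to get two (H, W) maps, combine them
    (a fixed mean here, standing in for a learned 7x7 conv), squash
    with a sigmoid, and gate every spatial position."""
    avg = x.mean(axis=0)                              # (H, W)
    mx = x.max(axis=0)                                # (H, W)
    gate = 1.0 / (1.0 + np.exp(-(avg + mx) / 2.0))    # per-position weights
    return x * gate[None, :, :]

# For an all-ones input both pooled maps are 1, so every position is
# scaled by sigmoid(1).
x = np.ones((2, 2, 2))
out = spatial_attention(x)
```

Where channel attention asks "which feature maps matter", spatial attention asks "which positions matter", which is why the two complement each other in the encoder-decoder.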
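The PSNR metric used throughout the evaluation has a simple closed form, shown below for 8-bit images; SSIM is omitted since it involves windowed local statistics.

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    restored image: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64)
                   - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A constant error of 10 gives MSE = 100, so PSNR = 10*log10(65025/100).
ref = np.full((4, 4), 100.0)
noisy = ref + 10.0
value = round(psnr(ref, noisy), 2)          # → 28.13
```

Because PSNR is logarithmic in the MSE, the reported "at least 3.8%" gain corresponds to a substantially larger reduction in pixel-wise error.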