
Research On Single Image Super-Resolution Reconstruction Based On Deep Learning

Posted on: 2022-06-21    Degree: Master    Type: Thesis
Country: China    Candidate: J J Zhou    Full Text: PDF
GTID: 2518306488985809    Subject: Information and Communication Engineering
Abstract/Summary:
Owing to practical constraints such as transmission bandwidth and hardware equipment, the resolution of images presented on terminal display devices often cannot meet people's visual needs, especially in fields such as medical imaging and satellite remote sensing that demand high definition and fine detail. In recent years, with the rapid development of deep learning and big-data technology, deep learning has been widely applied to image super-resolution reconstruction. Image super-resolution is an ill-posed, one-to-many inverse problem: the goal is to recover a high-resolution image with rich detail from one or more low-resolution images. However, most existing algorithms simply stack convolutional layers, which leads to excessive model parameters and to artifacts and over-smoothing in the reconstructed image. To address these problems, this thesis proposes two image super-resolution reconstruction algorithms based on deep learning and attention mechanisms.

Most current approaches either consider only channel attention or rely solely on self-attention to model long-range dependencies, which increases model parameters and memory consumption, limits the representational capability of CNNs, and hinders their deployment on edge devices. This thesis therefore proposes a super-resolution algorithm based on a Dual Residual Global Context Attention Network (DRGCAN). (1) Starting from the traditional dual residual structure, improved dual residual units suited to the image super-resolution task are designed. (2) Multiple residual blocks built from these improved dual residual units are cascaded to control the width of the network. (3) A global context attention module is introduced into the residual group (DRIR) formed by cascading these residual blocks, efficiently modelling long-range dependencies and improving the network's representational capacity (a sketch of such a module is given after this abstract). (4) Multiple DRIRs are stacked to form the DRGCAN backbone and fused with a sub-pixel convolution module, mapping shallow coarse features to deep fine features, enlarging the receptive field, and fully enhancing the exchange of features across levels. (5) Experiments on five benchmark datasets show that the proposed model achieves competitive results in both visual quality and memory consumption.

Most existing CNN-based SISR architectures consider only channel or only spatial information and cannot exploit both jointly to improve SISR performance. To address this, a Mixed Attention Densely Residual Network (MADRN) is proposed, which uses channel and spatial information together to improve the representational power of network features. (1) Multiple residual blocks are cascaded into a residual group so that the model can intensively learn the missing high-frequency information. (2) Multiple residual groups are densely connected to achieve multi-level feature reuse and avoid learning redundant features, enhancing model performance. (3) A Laplacian spatial attention mechanism is designed so that the model can exploit the latent relationships among spatial features in super-resolution images and produce more accurate visual results. (4) A mixed attention module combining the Laplacian spatial attention and channel attention mechanisms is designed and introduced into each dense residual group, enabling the network to focus adaptively on learning valuable features (see the mixed attention sketch below). (5) Multiple dense residual groups are stacked to form the backbone network, and a sub-pixel convolution module is introduced for accurate image reconstruction; extensive experiments show qualitative and quantitative results comparable to the state of the art.
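The abstract does not give the layer-level design of DRGCAN's global context attention module, so the following is only a minimal PyTorch sketch of a GCNet-style global context block of the kind the description suggests. The class name, the reduction ratio, and all shapes are illustrative assumptions, not the thesis's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextAttention(nn.Module):
    """GCNet-style global context block (illustrative sketch): pools a single
    global descriptor via a softmax-weighted spatial sum, transforms it with a
    channel bottleneck, and adds it back to every position, so long-range
    dependencies are modelled at low parameter and memory cost."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.context_mask = nn.Conv2d(channels, 1, kernel_size=1)
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Attention weights over all spatial positions (context modelling).
        mask = F.softmax(self.context_mask(x).view(b, 1, h * w), dim=-1)
        # Weighted sum over positions yields one global descriptor per channel.
        context = torch.bmm(x.view(b, c, h * w), mask.transpose(1, 2))
        context = context.view(b, c, 1, 1)
        # Channel-wise transform, then broadcast-add to every position.
        return x + self.transform(context)

Because the attended descriptor is shared by all positions, this kind of block avoids the quadratic cost of full self-attention, which is consistent with the abstract's emphasis on memory consumption and edge deployment.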
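The mixed attention module of MADRN is likewise described only at a high level. The sketch below shows one plausible combination of squeeze-and-excitation channel attention with a spatial attention map driven by a fixed Laplacian (high-pass) filter; the 3x3 Laplacian kernel, the 1x1 convolutions, and the ordering of the two branches are assumptions for illustration, not the thesis's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class LaplacianSpatialAttention(nn.Module):
    """Spatial attention driven by a fixed depthwise Laplacian filter, so the
    attention map emphasises edges and other high-frequency detail."""
    def __init__(self, channels):
        super().__init__()
        kernel = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        self.register_buffer("laplacian",
                             kernel.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.channels = channels
        self.squeeze = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        # Depthwise Laplacian filtering extracts high-frequency responses.
        high_freq = F.conv2d(x, self.laplacian, padding=1, groups=self.channels)
        attention = torch.sigmoid(self.squeeze(high_freq))  # (b, 1, h, w)
        return x * attention

class MixedAttention(nn.Module):
    """Channel attention followed by Laplacian spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = LaplacianSpatialAttention(channels)

    def forward(self, x):
        return self.sa(self.ca(x))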
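Both networks end in a sub-pixel convolution module, which is a standard upsampling tail. Assuming the conventional design, it can be sketched as follows; the channel counts and scale factor are placeholders.

import torch.nn as nn

class SubPixelUpsampler(nn.Module):
    """Standard sub-pixel convolution tail: a convolution expands the channel
    dimension by scale**2, then PixelShuffle rearranges those channels into
    spatial resolution before a final convolution maps to the RGB image."""
    def __init__(self, channels, scale=4, out_channels=3):
        super().__init__()
        self.expand = nn.Conv2d(channels, channels * scale ** 2,
                                kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.to_image = nn.Conv2d(channels, out_channels,
                                  kernel_size=3, padding=1)

    def forward(self, features):
        return self.to_image(self.shuffle(self.expand(features)))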
Keywords/Search Tags: Deep Learning, Image Super-Resolution, Attention Mechanisms, Dense Residual Connections, Multi-Scale Features