
Research on Image Super-Resolution Based on Recursive Residual Networks

Posted on: 2020-07-23    Degree: Master    Type: Thesis
Country: China    Candidate: L J Zhao    Full Text: PDF
GTID: 2428330578968583    Subject: Engineering
Abstract/Summary:
In recent years, deep convolutional neural networks have achieved remarkable results in single-image super-resolution reconstruction, but most of them improve performance by deepening the network. As the number of layers grows, so do the parameter count, memory footprint, and computational complexity. Limited by computing power, memory, and power consumption, such methods are of little practical value on resource-constrained devices such as mobile or embedded platforms. This thesis proposes a new recursive residual network that aims to build a more compact model with fewer parameters and lower computational complexity, without sacrificing reconstruction quality, thereby improving the practical value of single-image super-resolution on resource-constrained mobile or embedded devices. The main contributions of this method are as follows:

(1) Local residual learning is introduced to carry more image information. In VDSR, the residual image is estimated between the input and output of the network, which is called global residual learning. However, very deep networks can suffer performance degradation because image detail is lost after passing through many layers. To address this, the proposed method introduces an enhanced residual block for local residual learning, in which the identity branch not only carries image detail to the deeper layers of the network but also eases the flow of gradients.

(2) A recursive structure over the residual blocks reduces the number of parameters. In this method, every residual block adds the feature map extracted by the first convolutional layer, so the identity branch of each residual block is identical. A recursive structure is therefore applied to the residual block, forming a recursive residual network: the weights are shared across these residual blocks, the number of parameters is greatly reduced, and the model structure becomes more compact.

(3) Deconvolution (transposed convolution) reduces the computational complexity. The computational cost of the network is proportional to the size of the input image. The proposed method takes the original, uninterpolated low-resolution image as input, whereas some super-resolution methods take a bicubic-interpolated low-resolution image as input. The former is about 1/n² the size of the latter (where n is the upscaling factor), so the computational complexity of the method is reduced to roughly 1/n² of that of interpolation-based approaches.
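To make points (1)–(3) above concrete, the following is a minimal PyTorch sketch of such a recursive residual network. It is an illustrative reconstruction only: the channel width, the number of recursions, the transposed-convolution hyperparameters, and the bicubic global-residual skip are assumptions, not the exact configuration reported in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EnhancedResidualBlock(nn.Module):
    """Residual block whose identity branch adds the first-layer feature map F0."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, f0: torch.Tensor) -> torch.Tensor:
        # Local residual learning: F0 is re-injected at every block so that
        # shallow image detail (and gradients) reach the deeper layers.
        return self.body(x) + f0


class RecursiveResidualSR(nn.Module):
    """One shared residual block applied recursively, with transposed-convolution
    upsampling at the end of the network (hypothetical configuration)."""

    def __init__(self, scale: int = 2, channels: int = 64, recursions: int = 9):
        super().__init__()
        self.scale = scale
        self.recursions = recursions
        self.head = nn.Conv2d(3, channels, 3, padding=1)  # extracts F0 from the raw LR input
        self.block = EnhancedResidualBlock(channels)      # a single block, reused recursively
        # Kernel/stride/padding below assume an even scale factor (e.g. 2 or 4).
        self.up = nn.ConvTranspose2d(channels, channels, scale * 2,
                                     stride=scale, padding=scale // 2)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        f0 = self.head(lr)
        x = f0
        # Recursive structure with weight sharing: depth grows with `recursions`
        # while the parameter count stays fixed.
        for _ in range(self.recursions):
            x = self.block(x, f0)
        # All residual blocks run on the small LR grid; upsampling is deferred
        # to this transposed convolution, cutting the cost to roughly 1/n^2.
        sr = self.tail(self.up(x))
        # Global residual learning (VDSR-style), realized here with a bicubic skip.
        return sr + F.interpolate(lr, scale_factor=self.scale,
                                  mode="bicubic", align_corners=False)


# Quick shape check with a dummy x2 low-resolution input.
model = RecursiveResidualSR(scale=2, recursions=9)
lr = torch.randn(1, 3, 48, 48)
print(model(lr).shape)                              # torch.Size([1, 3, 96, 96])
print(sum(p.numel() for p in model.parameters()))   # independent of `recursions`
```

Because the same block weights are reused at every recursion, increasing the number of recursions deepens the network without increasing the parameter count, which is the compactness property the abstract emphasizes.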
Keywords/Search Tags:Recursive structure, residual learning, convolutional neural network, deep learning, super-resolution