
Research On Some Super-Resolution Reconstruction Algorithms Based On Residual Learning Network

Posted on: 2021-09-08
Degree: Master
Type: Thesis
Country: China
Candidate: B J Chen
Full Text: PDF
GTID: 2518306308484924
Subject: Applied Mathematics

Abstract/Summary:
This thesis studies super-resolution (SR) reconstruction algorithms based on residual learning networks, and improves basic convolutional neural networks by combining them with the pyramid bottleneck residual unit, the Res2Net module, and the SEnet module, respectively. The specific work is summarized as follows:

1. The first part presents an image SR reconstruction algorithm based on a deep recursive convolutional neural network, in which residual learning and a cascaded structure are applied to SR reconstruction so as to reduce the difficulty of reconstructing the SR image directly. The new network combines the deep recursive convolutional network with the pyramid bottleneck residual unit, to which feature-division and feature-fusion operations are added; the shallow features extracted by the first three convolutional layers are thereby combined with the deep features extracted by the first six convolutional layers, making full use of contextual information. Concretely, two convolutional layers first extract initial features from the original low-resolution (LR) image; a cascade of fine-extraction blocks then extracts more useful features step by step and removes redundant ones; finally, a deconvolution operation recovers the features. The whole network uses residual learning to speed up training. Experiments show that the proposed method reconstructs better than the deep recursive convolutional network.

2. Because the network designed in the previous chapter has few layers, its reconstruction quality is limited. This chapter therefore proposes an image SR reconstruction algorithm based on a densely connected bottleneck residual network, which extracts more information by deepening the network and enlarging the receptive field, and alleviates vanishing gradients by introducing skip connections into the network structure. The new network combines the deep recursive residual network with the pyramid bottleneck residual unit; feature-division and feature-fusion operations are applied to different layers of the unit, and the reconstruction quality with and without these operations is compared through training and testing to select the best structure. Specifically, two convolutional layers first extract low-level features; seven densely connected residual units then produce high-level features, with the dense connections strengthening feature reuse and reducing computational complexity; finally, a deconvolution operation restores the features. Experiments show that the proposed method reconstructs better than the deep recursive residual network.
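The two algorithms above share the same overall pipeline: initial convolutional layers extract shallow features from the LR input, a cascade of residual blocks refines and fuses them, and a deconvolution restores the resolution under global residual learning. The following PyTorch sketch illustrates that pipeline only; the block design, channel widths, block count, and the bicubic global-residual path are assumptions, since the abstract gives no layer-level hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BottleneckResidualBlock(nn.Module):
    # Simplified stand-in for the pyramid bottleneck residual unit (local residual learning).
    def __init__(self, channels=64, bottleneck=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, 1),               # 1x1: reduce channels
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, 3, padding=1),  # 3x3: transform
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, 1),               # 1x1: restore channels
        )

    def forward(self, x):
        return x + self.body(x)                               # local skip connection


class CascadedResidualSR(nn.Module):
    # Shallow conv features -> cascaded residual blocks (outputs fused) -> deconvolution.
    def __init__(self, scale=2, channels=64, num_blocks=7):
        super().__init__()
        self.scale = scale
        self.head = nn.Sequential(                            # two initial feature-extraction convs
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.blocks = nn.ModuleList(
            [BottleneckResidualBlock(channels) for _ in range(num_blocks)]
        )
        self.fuse = nn.Conv2d(channels * num_blocks, channels, 1)   # fuse collected block outputs
        self.upsample = nn.ConvTranspose2d(channels, channels,      # deconvolution (even scales)
                                           kernel_size=2 * scale, stride=scale, padding=scale // 2)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr):
        shallow = self.head(lr)
        x, collected = shallow, []
        for block in self.blocks:                             # cascade of residual blocks
            x = block(x)
            collected.append(x)                               # keep every output for dense fusion
        x = self.fuse(torch.cat(collected, dim=1)) + shallow  # skip back to the shallow features
        sr = self.tail(self.upsample(x))                      # deconvolution restores resolution
        # global residual learning: add a bicubic-upsampled copy of the LR input
        return sr + F.interpolate(lr, scale_factor=self.scale, mode="bicubic", align_corners=False)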
3. The networks designed in the first two chapters contain complex modules, which greatly increases training time and raises the hardware requirements. This chapter therefore proposes an SR algorithm based on a densely connected network of improved residual units, aiming to build a simple network of about 20 layers on top of residual modules for single-image SR reconstruction. The new structure combines global residual learning, local residual learning, and skip connections, which greatly increases feature reuse. In particular, the residual units extract residual features, 1×1 convolutional layers select the useful ones, and a deconvolution recovers the features. The Res2Net module builds hierarchical residual-like connections inside a residual block and expresses multi-scale features at a granular level, thereby enlarging the receptive field of each network layer; SEnet learns the importance of each feature channel, keeps the useful features in the feature map according to the learned importance, and suppresses redundant ones. Res2Net and SEnet are therefore added to the proposed model, and the augmented model is compared with the base model. Experimental results show that the proposed algorithm reconstructs better than existing shallow convolutional networks.
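Below is a minimal PyTorch sketch of the two modules discussed above: a Res2Net-style split block and an SEnet channel-attention block. The split count, reduction ratio, and exact wiring are common defaults assumed here, not values taken from the thesis, and the abstract does not specify where the modules are inserted into the proposed network.

import torch
import torch.nn as nn


class Res2NetBlock(nn.Module):
    # Res2Net-style unit: split the channels into groups and chain 3x3 convolutions
    # across the groups, which builds hierarchical residual-like connections and
    # multi-scale receptive fields inside a single block.
    def __init__(self, channels=64, splits=4):
        super().__init__()
        assert channels % splits == 0
        self.splits = splits
        width = channels // splits
        self.convs = nn.ModuleList(
            [nn.Conv2d(width, width, 3, padding=1) for _ in range(splits - 1)]
        )

    def forward(self, x):
        groups = torch.chunk(x, self.splits, dim=1)
        outputs = [groups[0]]                        # first group passes through unchanged
        prev = groups[0]
        for conv, g in zip(self.convs, groups[1:]):
            prev = torch.relu(conv(g + prev))        # each group also sees the previous output
            outputs.append(prev)
        return torch.cat(outputs, dim=1) + x         # local residual connection


class SEBlock(nn.Module):
    # SEnet channel attention: learn one importance weight per channel and rescale
    # the feature map, emphasising useful channels and suppressing redundant ones.
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pooling
        return x * weights.view(b, c, 1, 1)          # excitation: per-channel rescaling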
Keywords/Search Tags: Residual learning, Super-resolution reconstruction, Convolutional neural network, Pyramid bottleneck residual unit, Res2Net module, SEnet module