The main purpose of image super-resolution is to reconstruct a low-resolution image into a high-resolution image with rich texture details and good visual quality, which is of great significance in fields such as medical imaging and video surveillance. In recent years, with the great progress of deep learning in computer vision and other fields, image super-resolution methods based on deep convolutional neural networks have become a hot research topic. However, to achieve better reconstruction quality, existing image super-resolution models keep increasing the depth and complexity of the network, resulting in excessive model parameters and computation and making them difficult to apply in real-world scenarios. The main focus of this paper is to reduce network complexity, the number of model parameters, and the amount of computation while still fully extracting and exploiting image feature information to reconstruct high-resolution images with rich texture details. To this end, this paper proposes three solutions:

(1) Most previous image super-resolution methods extract features only from the down-sampled image and perform up-sampling at the end of the network. Considering that images at different scales and sizes contain different feature information, this paper proposes an effective lightweight image super-resolution method with a multi-scale feature interaction network. The model takes a lightweight recurrent residual channel attention module as its basic unit and builds a three-layer deep feature extraction and fusion network: each layer extracts image features at a different scale and fuses feature information from the other layers. Experiments on four benchmark datasets (Set5, Set14, BSDS100, and Urban100) fully verify the effectiveness of the proposed method.

(2) The Transformer can model global dependencies in an image and, at the same time, has strong feature representation ability, which helps to restore the texture details of the image. This paper proposes a lightweight bimodal network for single-image super-resolution via a symmetric CNN and a recursive Transformer, exploring ways to combine CNNs and Transformers to improve lightweight image super-resolution performance. The model uses a CNN-based fine-feature dual attention module to construct a symmetric CNN sub-network for local feature extraction and coarse image reconstruction; the recursive Transformer module then further refines the detailed features of the image by extracting its global dependency information. Experiments on five benchmark datasets (Set5, Set14, BSDS100, Urban100, and Manga109) fully verify the effectiveness of the proposed method.

(3) As network width and depth increase, current image super-resolution methods based on convolutional neural networks extract many repeated features, and these repeated features increase the computational resource consumption of the network. This paper therefore proposes a feature de-redundancy and self-calibration network for lightweight image super-resolution. The model takes the feature de-redundancy self-calibration module as its basic building block, extracting and fusing the features produced by the de-redundancy and self-calibration branches to reduce repeated feature information. Experiments on five benchmark datasets (Set5, Set14, BSDS100, Urban100, and Manga109) fully verify the effectiveness of the proposed method.
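The channel attention at the core of the recurrent residual channel attention module in (1) can be sketched with generic squeeze-and-excitation-style gating. This is a minimal NumPy illustration, not the paper's exact module: the bottleneck MLP, reduction ratio, and random weights are illustrative assumptions.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative).

    feat: feature map of shape (C, H, W)
    w1:   weights (C, C // r) for the channel-reduction layer
    w2:   weights (C // r, C) for the channel-expansion layer
    Returns the feature map rescaled per channel.
    """
    # Squeeze: global average pooling over the spatial dims -> (C,)
    desc = feat.mean(axis=(1, 2))
    # Excite: bottleneck MLP with ReLU, then sigmoid gating in (0, 1)
    hidden = np.maximum(desc @ w1, 0.0)
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2)))
    # Rescale each channel by its learned importance
    return feat * weights[:, None, None]

rng = np.random.default_rng(0)
c, r = 8, 4
feat = rng.standard_normal((c, 6, 6))
out = channel_attention(feat,
                        rng.standard_normal((c, c // r)),
                        rng.standard_normal((c // r, c)))
print(out.shape)  # (8, 6, 6)
```

Because the gate is computed from a global average over each channel, the module reweights whole channels by importance at negligible cost, which is why such units are popular in lightweight super-resolution networks.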
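The global-dependency modeling that the Transformer branch in (2) relies on is scaled dot-product self-attention over spatial positions. A minimal sketch follows (plain NumPy, single head, no learned query/key/value projections — all simplifying assumptions, not the paper's recursive Transformer itself):

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over spatial positions.

    x: tokens of shape (N, D), e.g. an H*W grid of D-dim features.
    Each output token is a weighted mix of ALL input tokens, so the
    receptive field is global -- unlike a fixed-size convolution.
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                 # (N, N) similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over positions
    return attn @ x                               # (N, D)

rng = np.random.default_rng(1)
tokens = rng.standard_normal((16, 8))             # a 4x4 grid, D = 8
out = self_attention(tokens)
print(out.shape)  # (16, 8)
```

The attention matrix couples every position with every other one in a single step, which is the property that lets a Transformer branch recover long-range structure a small CNN misses.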
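The redundancy targeted in (3) can be made concrete by measuring similarity between channel maps: highly correlated channels carry repeated information. The sketch below is a hypothetical diagnostic (cosine similarity with an arbitrary threshold); the paper's actual de-redundancy module is a learned component, not this hard filter.

```python
import numpy as np

def redundant_channel_pairs(feat, thresh=0.99):
    """Find pairs of near-duplicate channels in a feature map.

    feat: feature map of shape (C, H, W).
    Returns index pairs (i, j), i < j, whose flattened channel maps
    have cosine similarity above `thresh`.
    """
    c = feat.shape[0]
    flat = feat.reshape(c, -1)
    unit = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sim = unit @ unit.T                      # (C, C) cosine similarities
    return [(i, j) for i in range(c) for j in range(i + 1, c)
            if sim[i, j] > thresh]

rng = np.random.default_rng(2)
feat = rng.standard_normal((4, 5, 5))
feat[3] = feat[0] * 2.0                      # plant a redundant channel
print(redundant_channel_pairs(feat))  # [(0, 3)]
```

A convolution that produces channel 3 here adds compute but no new information, which is the waste a de-redundancy module is designed to suppress.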