Image quality may be degraded during image generation, transmission, and storage, leading to information loss and affecting the execution of various visual tasks. Image super-resolution aims to restore a high-resolution image from its low-resolution counterpart, which is of great significance in fields such as medical imaging, remote sensing, and security monitoring. Deep convolutional neural networks (CNNs) have become popular for image super-resolution reconstruction in recent years owing to their powerful representation ability. However, most current CNNs require huge computational and storage resources, making them difficult to deploy on mobile devices and other intelligent terminals with limited computing power or storage. To address this issue, two lightweight networks are proposed, summarized as follows:

(1) A lightweight pyramid-pooling attention network is proposed to effectively improve performance while keeping the number of parameters and the computational complexity comparable to current state-of-the-art (SOTA) lightweight algorithms. First, the information distillation block is used as the basic building block for feature extraction. Then, a pyramid pooling module is introduced to further extract multi-scale information from the features produced by all information distillation blocks, enriching image details and structural information. Finally, a backward attention fusion module is adopted, which discriminates the importance of features at different levels through an enhanced spatial attention mechanism and fuses adjacent-level features in reverse order to effectively integrate the key features of global information, further improving model performance.

(2) To achieve lightweight multi-scale learning, a lightweight and efficient weighted multi-scale information fusion network is proposed. First, by combining the advantages of multi-scale learning, adaptive weighted residual learning, and the attention mechanism, a weighted multi-scale residual attention block is constructed to efficiently extract multi-scale information from images and enrich feature information. Then, an information fusion unit is introduced to fuse the features extracted by the weighted multi-scale residual attention blocks both locally and globally, making full use of the mid-to-high-frequency information in the network and further improving performance.

Extensive experiments on four benchmark test sets are conducted to evaluate the proposed models, which are compared quantitatively and qualitatively with SOTA lightweight methods. Two commonly used evaluation metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are employed, together with qualitative visual comparison. The results show that the proposed models achieve better reconstruction performance than the competitors. In addition, parameter and ablation experiments are conducted to verify the rationality and effectiveness of the main components of the proposed models.
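To make the pyramid pooling idea in contribution (1) concrete, the following is a minimal PyTorch sketch of a generic pyramid pooling module, which pools features over several bin sizes and fuses the resulting multi-scale context back at the input resolution. The module name, bin sizes, and channel counts are illustrative assumptions, not the exact design of the proposed network.

```python
# Illustrative sketch only; not the thesis's exact module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pools features at several scales and fuses them with the input.
    Bin sizes and channel counts are assumptions for illustration."""
    def __init__(self, channels, bins=(1, 2, 4, 8)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),                       # pool to b x b bins
                nn.Conv2d(channels, channels // len(bins), 1)  # reduce channels
            )
            for b in bins
        ])
        # 1x1 convolution that fuses the pooled branches with the input features
        self.fuse = nn.Conv2d(channels + (channels // len(bins)) * len(bins),
                              channels, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x] + [
            F.interpolate(stage(x), size=(h, w), mode='bilinear',
                          align_corners=False)                 # upsample back
            for stage in self.stages
        ]
        return self.fuse(torch.cat(feats, dim=1))
```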
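Similarly, the weighted multi-scale residual attention block in contribution (2) combines parallel multi-scale convolution branches with adaptive weighted residual learning. The sketch below shows that general pattern only; the kernel sizes and learnable scalar weights are assumptions, and the attention part is omitted.

```python
# Illustrative sketch only; not the thesis's exact block.
import torch
import torch.nn as nn

class WeightedMultiScaleBlock(nn.Module):
    """Parallel 3x3 and 5x5 branches fused by a 1x1 convolution, combined with
    the input through a learnable weighted residual connection (assumed form)."""
    def __init__(self, channels):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)  # 3x3 branch
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)  # 5x5 branch
        self.fuse = nn.Conv2d(2 * channels, channels, 1)            # merge branches
        self.act = nn.ReLU(inplace=True)
        # learnable scalars for the weighted residual connection
        self.w_res = nn.Parameter(torch.ones(1))
        self.w_body = nn.Parameter(torch.ones(1))

    def forward(self, x):
        body = self.fuse(torch.cat([self.act(self.branch3(x)),
                                    self.act(self.branch5(x))], dim=1))
        return self.w_res * x + self.w_body * body
```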
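For reference, the two quantitative metrics used in the experiments are defined in the standard way, where $\mathrm{MAX}_I$ is the maximum pixel value, $\mathrm{MSE}$ is the mean squared error between the reconstructed image and the ground truth, $\mu$, $\sigma^2$, and $\sigma_{xy}$ are local means, variances, and covariance, and $C_1$, $C_2$ are small stabilizing constants:

$$\mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}, \qquad
\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}.$$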