
Attention In Attention Networks For Single Image Super-Resolution

Posted on: 2021-01-05
Degree: Master
Type: Thesis
Country: China
Candidate: Y M Xiao
GTID: 2428330626958910
Subject: Computer technology
Abstract/Summary:
Digital images are an important objective information carrier in the information age. An image's resolution reflects the amount of information it carries: the higher the resolution, the more detail the image provides. In practice, however, imaging is affected by degradation factors such as imaging equipment, sensing technology, and lossy transmission, so much key information is lost and the resulting image cannot meet application requirements. Super-resolution reconstruction technology was therefore proposed to improve the spatial resolution of low-quality images algorithmically after acquisition, and it is of great research significance in fields such as medicine, remote sensing, and surveillance. Considerable progress has been made in super-resolution reconstruction both at home and abroad.

This paper focuses on single image super-resolution reconstruction algorithms based on convolutional neural networks. Such algorithms build a sample database from a large number of high-definition images, use the representation ability of a convolutional neural network to learn the spatial mapping between low-resolution and high-resolution images, and then use this mapping as prior knowledge to reconstruct super-resolution images. At present, most algorithms deepen the network by repeatedly stacking a single type of residual module to improve model performance. However, an overly deep network loses high-frequency information and incurs high training and inference costs. In addition, a single network structure does not fully exploit the convolutional neural network's ability to characterize low-resolution images from shallow to deep layers, so the original feature information is insufficiently extracted and reconstructed images lack realistic detail.

To address these problems, this paper proposes a single image super-resolution reconstruction algorithm based on attention in attention networks (AIASR). AIASR is composed of the following three mechanisms (see the sketch after this list):

(1) Deep attention mechanism. In the feature extraction part, this mechanism partitions the network layers by depth and assigns each region a different number of convolution kernels in an inverted-pyramid structure, so that shallow layers close to the original low-resolution image have more opportunities for feature extraction and can obtain richer basic local features, providing more choices for subsequent layers.

(2) Window attention mechanism. This mechanism is introduced in the residual module. It uses multiple windows of different sizes to explore different high-level features of the feature map under different receptive fields, and then computes channel weights for each window's output feature map through a channel attention mechanism. All outputs are then fused and filtered across windows, which makes it easier to retain high-value key features and enhances the network's ability to discriminate information. AIASR uses the window attention module as the basic feature extraction component and deepens the network by repeatedly stacking this module. In addition, in the super-resolution reconstruction part, a window attention module is used to organize the low-resolution feature space before it is upsampled.

(3) Global feature fusion mechanism. At the end of the feature extraction part, this mechanism fully reuses the key features at all levels output by all previous window attention modules to refine the low-resolution feature space, compensating for the high-frequency information lost during transmission through the deep network.
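To make the three mechanisms concrete, the following is a minimal PyTorch sketch, not the thesis implementation: the "windows" are assumed to be parallel convolution branches with different kernel sizes, the channel attention is assumed to be a squeeze-and-excitation style gate, and all names, kernel sizes, channel widths, and block counts (ChannelAttention, WindowAttentionBlock, DeepAttentionStem, AIASRSketch) are illustrative assumptions.

```python
# Minimal PyTorch sketch of the AIASR building blocks described above (illustrative
# only; layer names, kernel sizes, channel widths, and block counts are assumptions,
# not values from the thesis).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style channel attention: global pooling -> bottleneck -> sigmoid gate."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(self.pool(x))


class WindowAttentionBlock(nn.Module):
    """Residual block with parallel 'windows' (different receptive fields),
    each gated by channel attention, then fused and filtered across windows."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.windows = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, k, padding=k // 2),
                nn.ReLU(inplace=True),
                ChannelAttention(channels),
            )
            for k in kernel_sizes
        ])
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x):
        out = torch.cat([w(x) for w in self.windows], dim=1)
        return x + self.fuse(out)


class DeepAttentionStem(nn.Module):
    """Inverted-pyramid shallow extractor: layers nearer the LR input get more kernels."""

    def __init__(self, in_channels=3, widths=(256, 128, 64)):
        super().__init__()
        layers, prev = [], in_channels
        for w in widths:
            layers += [nn.Conv2d(prev, w, 3, padding=1), nn.ReLU(inplace=True)]
            prev = w
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


class AIASRSketch(nn.Module):
    """Toy model: deep-attention stem -> stacked window attention blocks ->
    global feature fusion over all block outputs -> pixel-shuffle upsampling."""

    def __init__(self, scale=2, channels=64, num_blocks=4):
        super().__init__()
        self.stem = DeepAttentionStem(widths=(256, 128, channels))
        self.blocks = nn.ModuleList(
            [WindowAttentionBlock(channels) for _ in range(num_blocks)]
        )
        self.gff = nn.Conv2d(channels * num_blocks, channels, 1)  # global feature fusion
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, lr):
        feat = self.stem(lr)
        outs, x = [], feat
        for block in self.blocks:
            x = block(x)
            outs.append(x)
        x = feat + self.gff(torch.cat(outs, dim=1))  # reuse every block's output
        return self.upsample(x)
```

As a quick sanity check, `AIASRSketch(scale=2)(torch.randn(1, 3, 48, 48))` would return a tensor of shape `1x3x96x96`, i.e. the input upscaled by the chosen factor.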
This paper trains AIASR on the high-definition dataset DIV2K and evaluates it on the commonly used benchmark datasets Set5, Set14, BSD100, Urban100, and Manga109. Experiments show that the internal mechanisms of AIASR complement one another and effectively improve model performance. In addition, AIASR surpasses most popular models on the common evaluation metrics PSNR and SSIM, and its reconstructed super-resolution images are visually more realistic and clearer than the results of other models.
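For reference, PSNR, the primary metric quoted above, can be computed in a few lines of NumPy. The border-crop option and the choice of data range below are common conventions in the super-resolution literature, not details taken from the thesis; SSIM is usually computed with a library such as scikit-image rather than by hand.

```python
# Minimal PSNR sketch (NumPy); crop/data-range choices are conventions, not thesis details.
import numpy as np


def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0, crop: int = 0) -> float:
    """Peak signal-to-noise ratio between a super-resolved image and its ground truth."""
    if crop > 0:
        sr = sr[crop:-crop, crop:-crop]
        hr = hr[crop:-crop, crop:-crop]
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)
```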
Keywords/Search Tags: single image super-resolution reconstruction, attention in attention, deep attention, window attention, convolutional neural network