
Multi-scale Feature Fusion Network For Image Super-resolution Reconstruction

Posted on: 2019-11-13
Degree: Master
Type: Thesis
Country: China
Candidate: X X Fan
Full Text: PDF
GTID: 2428330572952226
Subject: Circuits and Systems
Abstract/Summary:
In practical application scenarios, image acquisition is limited by the imaging system and the imaging environment, making it difficult to obtain an ideal high-resolution (HR) image or image sequence. Image super-resolution reconstruction has attracted much attention because of its low cost and its ability to effectively reconstruct low-resolution (LR) images into high-quality high-resolution images, and it has been widely applied in fields such as artificial intelligence, video surveillance, and remote sensing imaging. In recent years, deep learning has made major breakthroughs in image super-resolution reconstruction. Most deep-learning-based methods learn an end-to-end network that fits the mapping between high-resolution and low-resolution images so that the reconstructed results are visually satisfactory. This thesis studies deep-learning-based image super-resolution reconstruction in depth and proposes two reconstruction methods based on multi-scale feature fusion.

First, this thesis proposes a structured sparse multi-scale feature fusion network for image super-resolution, addressing the problem that existing methods learn the mapping only from single-scale image features and therefore lose key information needed for reconstruction. The core component of the network is the multi-scale feature fusion module, which extracts image features at different scales and thus captures more complete structural and contextual information. The network cascades multiple multi-scale feature fusion modules to fit the mapping between high- and low-resolution images more accurately, which yields higher-quality reconstructed images. However, the complicated network structure and the large number of parameters make training difficult, increase the computational cost, and slow down reconstruction. To solve this problem, a network compression strategy is adopted that imposes structured sparsity on the parameters of the multi-scale feature fusion network: redundant parameters are removed, a more compact network is obtained, and the reconstruction speed is improved. Because sparse representation is effective in image super-resolution tasks, the performance of the structured sparse multi-scale feature fusion network is further improved. Experimental results show that the algorithm outperforms many current state-of-the-art image super-resolution reconstruction algorithms.

Second, because the multi-scale feature fusion network described above is relatively complex, this thesis proposes a single-core multi-scale feature fusion network with a simpler structure. Exploiting the fact that the feature maps output by different convolutional layers correspond to different scales, the features of different convolutional layers are fused to form multi-scale features. In this way, the underlying information in the multi-scale features is fully exploited and the quality of image reconstruction is improved, while the complexity of the network is effectively reduced. Experimental results show that the proposed method greatly improves reconstruction speed while preserving reconstruction quality.
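The abstract does not give the exact architecture, so the following is only a minimal PyTorch sketch of what a multi-scale feature fusion module of the kind described above could look like: parallel convolution branches with different kernel sizes whose outputs are concatenated and fused. The kernel sizes, channel count, residual connection, and number of cascaded modules are illustrative assumptions, not the thesis's configuration.

```python
# Hypothetical sketch of a multi-scale feature fusion module (not the thesis's exact design).
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    """Extracts features at several scales in parallel and fuses them."""
    def __init__(self, channels=64):
        super().__init__()
        # Parallel branches with different receptive fields (assumed kernel sizes).
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([self.act(self.branch3(x)),
                           self.act(self.branch5(x)),
                           self.act(self.branch7(x))], dim=1)
        # Residual connection keeps a cascade of such modules easy to train.
        return x + self.fuse(feats)

# Several modules can be cascaded to fit the LR-to-HR mapping.
body = nn.Sequential(*[MultiScaleFusionBlock(64) for _ in range(4)])
y = body(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```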
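For the network compression step, one common way to impose structured sparsity is a group-Lasso style penalty over whole convolution filters, so that entire filters are driven toward zero and can be pruned. The abstract does not specify the penalty, so the helper below is a hedged sketch under that assumption; the penalty weight and the `l1_loss` usage mentioned in the comments are hypothetical.

```python
# Hedged sketch of structured sparsity for network compression (assumed group-Lasso form).
import torch
import torch.nn as nn

def filter_group_lasso(model, weight=1e-4):
    """Sum of L2 norms of each output filter in every Conv2d layer (one group per filter)."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # m.weight has shape (out_channels, in_channels, kH, kW).
            penalty = penalty + m.weight.flatten(1).norm(dim=1).sum()
    return weight * penalty

# During training the penalty would be added to the reconstruction loss, e.g.
#   loss = l1_loss(sr, hr) + filter_group_lasso(net)
# Filters whose norm falls below a threshold can then be removed, giving a more
# compact network and faster reconstruction.
```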
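The second method fuses features taken from different convolutional layers, since deeper layers have larger receptive fields and therefore represent different scales. The sketch below illustrates that idea only; the depth, channel width, and 1x1 fusion layer are assumptions for illustration.

```python
# Illustrative sketch of fusing features from different convolutional layers (assumed layout).
import torch
import torch.nn as nn

class CrossLayerFusionNet(nn.Module):
    def __init__(self, channels=64, depth=5):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1) for _ in range(depth)])
        self.act = nn.ReLU(inplace=True)
        # Fuse the outputs of all intermediate layers into one feature map.
        self.fuse = nn.Conv2d(depth * channels, channels, kernel_size=1)

    def forward(self, x):
        feats = []
        h = x
        for conv in self.layers:
            h = self.act(conv(h))
            feats.append(h)  # each layer contributes features at a different scale
        return self.fuse(torch.cat(feats, dim=1))

net = CrossLayerFusionNet()
out = net(torch.randn(1, 64, 48, 48))
print(out.shape)  # torch.Size([1, 64, 48, 48])
```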
Keywords/Search Tags:Deep neural network, Multi-scale feature fusion, Network compression, Structured sparsity, Super-resolution