
Image Super-Resolution Based On Deep Lightweight Neural Network

Posted on: 2024-04-29    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X R Jiang    Full Text: PDF
GTID: 1528307340975369    Subject: Multimedia Information Theory
Abstract/Summary:
Recently, with the rapid development of display devices and image-processing techniques, people's pursuit of a better sensory experience has kept growing, demanding ever-higher image clarity. Super-resolution technology emerged to obtain high-quality, clear images. Currently, benefiting from the excellent nonlinear-mapping capability of convolutional neural networks, deep-learning-based image super-resolution algorithms have flourished and achieved satisfactory reconstruction results. However, the model size and computational cost of deep super-resolution networks have been growing rapidly, making them difficult to deploy on resource-constrained mobile devices such as smartphones and embedded devices. Therefore, compressing high-performance convolutional neural networks to an appropriate scale, so that they can run on resource-limited platforms, is of great importance to the image super-resolution field. To this end, this dissertation explores lightweight techniques for image super-resolution, reducing the parameters and computational complexity of super-resolution models and providing effective solutions for the design of lightweight super-resolution methods. The main contributions of this dissertation are as follows:

1. A lightweight channel fusion-enhancement network for single-image super-resolution is proposed. Many existing image super-resolution methods rely on deepening or widening networks to improve performance, which incurs high computational and memory costs and makes them difficult to deploy on devices with limited computation. To address this issue, this dissertation introduces a channel-fusion module with few parameters and low computational cost. It first extracts features using group convolutions, reducing the computational complexity of ordinary convolution operations, and then fuses features from adjacent groups to mitigate the information-inconsistency problem incurred by grouping. A global-relation inference module is proposed to model dependencies between different channels, further enhancing interactions between features from different groups. Additionally, a multi-scale information-enhancement module handles information from different receptive fields, strengthening the network's feature-extraction capability. With this design, the super-resolution network can efficiently extract the features required for reconstruction while reducing the computational cost of the reconstruction process, achieving excellent reconstruction performance with fewer parameters and lower computational complexity.

2. A lightweight super-resolution network based on weight pruning is proposed. Currently, most lightweight super-resolution algorithms focus on structural redundancy within the network, reducing computational cost by designing compact architectures; however, such structural design requires expert experience. Model-compression techniques can instead be combined with existing deep super-resolution networks to obtain lightweight models of different scales more flexibly. This dissertation argues that super-resolution networks contain parameter redundancy, and that removing some redundant parameters does not significantly affect performance. It therefore introduces a progressive sparsity-optimization algorithm that gradually prunes unimportant weights, compressing the model to a specified sparsity level. The compressed model shows no significant performance degradation, offering a new route to lightweight super-resolution. Furthermore, this dissertation designs a novel gate-attention module to obtain more discriminative channel attention and introduces a multi-slice information module based on hierarchical residual connections to build a lightweight super-resolution architecture. Finally, the proposed network is combined with weight pruning to obtain an even more lightweight super-resolution model.

3. A binary super-resolution network without batch normalization is proposed. Binary quantization converts the weights and activations of a model from floating-point values to 1-bit values, significantly reducing memory consumption and computational cost. This dissertation first examines recent advanced binarization methods and leverages their strengths to construct a binary baseline model tailored to image super-resolution, effectively compressing existing super-resolution networks. Furthermore, focusing on a distinctive property of super-resolution networks, namely that they often contain no batch-normalization layers, this dissertation explores a training mechanism for binary neural networks. By adjusting the input distribution, the initial weight distribution, and the activation distribution, this mechanism helps binary super-resolution networks achieve excellent performance without batch normalization. Additionally, this dissertation introduces a new binary architecture based on smaller convolutional kernels and uses a full-precision network to guide the optimization of the binary network, enhancing its representation capacity.

4. A binary image super-resolution network based on mixed binary representation is proposed. To further improve the efficiency of binary super-resolution networks, building on the training mechanism above, this dissertation introduces a mixed binary representation set for activations to approximate multi-bit representations. Specifically, different quantization thresholds are used to obtain diverse binary representations, and these features are then combined to simulate multi-bit activation representations. Compared with conventional single-threshold quantization methods that use zero as the quantization threshold, the mixed binary representation simulates multi-bit feature representations based on the activation distribution, compensating for the information loss of extreme quantization. Furthermore, based on the mixed binary representation, precision-driven binary convolution modules are introduced to simulate multi-bit convolution. These modules are plug-and-play units that can easily replace full-precision convolutions in existing super-resolution networks, facilitating the construction of high-performance binary super-resolution networks.

5. A frequency-aware binary super-resolution network is proposed. In image super-resolution networks, the rich image information contained in activations (such as color and texture) is lost during binarization, limiting the reconstruction performance of binary super-resolution networks. To address this issue, this dissertation introduces a frequency-aware binary super-resolution network. Specifically, a discrete wavelet transform first decomposes full-precision features into low-frequency and high-frequency components, and carefully designed binary architectures tailored to the different frequency components then process them separately. To achieve this, a dynamic binarization process is introduced in which the best quantization threshold is obtained through network optimization; during backward propagation, the network can also choose appropriate gradient-clipping intervals to seek the optimal gradient approximation. This divide-and-conquer strategy enhances the network's flexibility in learning image details, improving the reconstruction performance of the binary super-resolution network.

In summary, through the analysis of image super-resolution tasks, this dissertation proposes five lightweight super-resolution algorithms and provides effective solutions for the design of lightweight super-resolution networks.
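The parameter savings that the first contribution obtains from group convolutions follow from simple arithmetic. The sketch below is illustrative only; the channel counts and kernel size are hypothetical and not taken from the dissertation.

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a 2-D convolution layer: k*k*(c_in/groups)*c_out (biases ignored)."""
    assert c_in % groups == 0 and c_out % groups == 0
    return k * k * (c_in // groups) * c_out

standard = conv_params(64, 64, 3)            # ordinary 3x3 convolution
grouped = conv_params(64, 64, 3, groups=4)   # same in/out shape, split into 4 groups

print(standard, grouped)  # 36864 9216: the grouped layer uses 1/4 of the parameters
```

With `groups=g` the parameter and multiply-add counts both drop by a factor of `g`, which is why the channel-fusion module then needs the cross-group fusion step to restore information flow between groups.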
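The progressive sparsity optimization of the second contribution can be sketched as magnitude pruning whose target sparsity is raised step by step. The function name, the two-step schedule, and the toy weight vector below are assumptions for illustration, not the dissertation's actual algorithm.

```python
def prune_to_sparsity(weights, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # indices of the n_prune smallest-magnitude entries
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    dead = set(order[:n_prune])
    return [0.0 if i in dead else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
# raise the sparsity target gradually, as in a progressive pruning schedule
for s in (0.25, 0.5):
    w = prune_to_sparsity(w, s)

print(w)  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Pruning in small increments, with continued training between steps (omitted here), is what lets the network adapt so that the final sparse model avoids a large performance drop.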
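The 1-bit quantization underlying the third contribution is commonly realized as a sign function with a per-layer scaling factor (the mean absolute weight, as in XNOR-Net-style binarization). The dissertation's exact scheme may differ; this is a minimal sketch of the general idea.

```python
def binarize(weights):
    """1-bit quantization: sign of each weight times a shared scale (mean |w|)."""
    scale = sum(abs(w) for w in weights) / len(weights)
    return [scale if w >= 0 else -scale for w in weights]

w = [0.5, -0.25, 0.75, -1.0]
print(binarize(w))  # [0.625, -0.625, 0.625, -0.625]
```

Each weight is then stored as a single sign bit plus one shared float, which is the source of the large memory and compute savings; without batch normalization to re-center activations, the input/weight/activation distribution adjustments described above become essential.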
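The mixed binary representation of the fourth contribution can be illustrated by thresholding the same activations at several points and summing the resulting binary maps. The specific thresholds and scales below are hypothetical; in the dissertation they would be chosen from the activation distribution.

```python
def binary_repr(x, threshold):
    """Single binary map: +1 above the threshold, -1 otherwise."""
    return [1.0 if v > threshold else -1.0 for v in x]

def mixed_binary(x, thresholds, scales):
    """Sum several binary maps taken at different thresholds to mimic multi-bit activations."""
    out = [0.0] * len(x)
    for t, s in zip(thresholds, scales):
        for i, b in enumerate(binary_repr(x, t)):
            out[i] += s * b
    return out

acts = [-0.8, -0.1, 0.3, 1.2]
# conventional single-threshold binarization at zero: only two output levels
print(binary_repr(acts, 0.0))                          # [-1.0, -1.0, 1.0, 1.0]
# two thresholds yield three levels, a coarse multi-bit approximation
print(mixed_binary(acts, [-0.5, 0.5], [0.5, 0.5]))     # [-1.0, 0.0, 0.0, 1.0]
```

Each extra threshold adds one more representable level while every individual map stays 1-bit, so the convolution with each map can still use cheap binary arithmetic.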
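The frequency decomposition in the fifth contribution rests on the discrete wavelet transform. A one-level, unnormalized (averaging) Haar transform in 1-D is enough to show the low/high split; the dissertation applies a 2-D transform to feature maps, and the signal below is a made-up example.

```python
def haar_dwt_1d(x):
    """One level of the Haar wavelet transform: low-frequency averages, high-frequency details."""
    low = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    high = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return low, high

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0]
low, high = haar_dwt_1d(signal)
print(low, high)  # [5.0, 11.0, 8.0] [-1.0, -1.0, 0.0]
```

The low-frequency branch carries smooth content (color, coarse structure) and the high-frequency branch carries edge-like detail, which is why the network can afford to binarize the two branches with differently designed architectures.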
Keywords/Search Tags: Image Super-Resolution, Deep Learning, Lightweight Neural Networks, Model Pruning, Binary Neural Networks