
Research on Lightweight Convolutional Neural Networks

Posted on: 2020-10-23
Degree: Master
Type: Thesis
Country: China
Candidate: C X Chen
Full Text: PDF
GTID: 2428330590960693
Subject: Software engineering
Abstract/Summary:
Compared with traditional algorithms, convolutional neural networks have achieved better results in computer vision applications. However, convolutional neural network algorithms depend heavily on hardware such as high-performance servers and GPUs, and their large parameter counts and computational complexity make them hard to port to resource-limited portable devices. To make convolutional neural network algorithms faster and smaller, this thesis carries out the following work.

An analysis of current research on convolutional networks finds that the most common ways for large networks to reduce computational complexity and parameter count are to place 1×1 filters before larger filters, reducing the number of feature channels the larger filters must process, and to use group convolution. Large filters are now rarely used, and the combination of 1×1 and 3×3 filters has become the most common design for convolutional layers. Since the computational cost of the 1×1 convolutional layers remains high, this thesis proposes an inter-group feature mergence algorithm that improves the basic module of common convolutional layers to further reduce parameters and computation. Experimental results on DenseNet and ResNet show that the algorithm reduces the number of filters in the 1×1 convolutional layers to 1/g² (where g is the number of groups) without affecting the number of output feature channels or the network's learning capability.

In addition, this thesis proposes a pruning algorithm for DenseNet, a densely connected convolutional neural network, based on the weights of feature channels. A feature-weight adjustment layer is added to each feature channel of the network architecture so that the weight of each channel is learned automatically during training. In the pruning stage, entire convolutional filters with low weights are pruned and filters with high weights are kept. Experiments show that this method prunes 25% of the feature channels without affecting the test accuracy of the model.

The DenseNet model made lightweight with the two proposed methods has only 0.4M parameters and 55 MFLOPs of computation. Experiments with DenseNet on the CIFAR-10 dataset show a test accuracy loss of less than 1% compared with the original model, with fewer parameters, less computation, and higher classification accuracy than other lightweight convolutional neural networks such as ShuffleNet and MobileNet.
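To make the two standard cost-reduction tricks the abstract surveys concrete, the sketch below shows a bottleneck block in PyTorch: a 1×1 convolution shrinks the channel count before the 3×3 layer, and the 3×3 layer itself is grouped. This is a generic illustration of the surveyed techniques, not code from the thesis; the channel sizes and group count are arbitrary.

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """1x1 channel reduction followed by a grouped 3x3 convolution."""
    def __init__(self, in_channels=256, mid_channels=64, out_channels=256, groups=4):
        super().__init__()
        # 1x1 convolution reduces the feature maps from in_channels to
        # mid_channels, so the following 3x3 layer sees fewer channels.
        self.reduce = nn.Conv2d(in_channels, mid_channels, kernel_size=1)
        # Grouped 3x3 convolution: each of the `groups` filter groups only
        # connects to mid_channels // groups input channels, dividing this
        # layer's parameter count and multiply-adds by `groups`.
        self.conv3x3 = nn.Conv2d(mid_channels, out_channels, kernel_size=3,
                                 padding=1, groups=groups)

    def forward(self, x):
        return self.conv3x3(self.reduce(x))

x = torch.randn(1, 256, 32, 32)
print(BottleneckBlock()(x).shape)  # torch.Size([1, 256, 32, 32])
```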
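The abstract does not spell out the inter-group feature mergence operation itself. As a point of reference, a known way to cut the cost of 1×1 layers in the same spirit is ShuffleNet's grouped pointwise convolution followed by a channel shuffle: the grouped 1×1 layer uses 1/g of the parameters of a dense one, and the shuffle lets feature information cross group boundaries. The sketch below shows that ShuffleNet technique, not the thesis's algorithm.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    # Interleave channels across groups so the next grouped layer
    # sees features from every group (ShuffleNet-style shuffle).
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)
    return x.transpose(1, 2).reshape(n, c, h, w)

class GroupedPointwise(nn.Module):
    """Grouped 1x1 convolution + shuffle: parameters drop from
    in_c * out_c to in_c * out_c / groups versus a dense 1x1 layer."""
    def __init__(self, in_c=240, out_c=240, groups=3):
        super().__init__()
        self.conv = nn.Conv2d(in_c, out_c, kernel_size=1, groups=groups)
        self.groups = groups

    def forward(self, x):
        return channel_shuffle(self.conv(x), self.groups)

x = torch.randn(1, 240, 16, 16)
print(GroupedPointwise()(x).shape)  # torch.Size([1, 240, 16, 16])
```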
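The channel-pruning scheme can be pictured as a learnable per-channel scale attached during training, with low-scoring channels removed afterwards. The sketch below is a minimal interpretation assuming a simple scaling layer and the 25% pruning ratio quoted above; the thesis's exact adjustment layer and selection rule may differ.

```python
import torch
import torch.nn as nn

class ChannelWeight(nn.Module):
    """Per-channel scaling layer whose weights are learned jointly with
    the network; their magnitudes then rank channels for pruning."""
    def __init__(self, num_channels):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_channels))

    def forward(self, x):
        return x * self.weight.view(1, -1, 1, 1)

def keep_mask(layer, prune_ratio=0.25):
    # Keep the (1 - prune_ratio) fraction of channels with the largest
    # learned weights; the corresponding filters in adjacent conv layers
    # can then be physically removed.
    scores = layer.weight.detach().abs()
    k = int(scores.numel() * (1 - prune_ratio))
    threshold = scores.topk(k).values.min()
    return scores >= threshold

cw = ChannelWeight(16)
with torch.no_grad():
    cw.weight.copy_(torch.rand(16))  # stand-in for trained weights
mask = keep_mask(cw)
print(mask.sum().item(), "of 16 channels kept")  # typically 12 of 16
```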
Keywords/Search Tags: Deep Learning, Convolutional Neural Network, Lightweighting, Network Pruning