With the development of deep learning, innovative algorithms are constantly being proposed. Although deep convolutional neural networks have achieved great success in image recognition, a variety of challenges still need to be addressed before they can be applied widely. This paper improves popular convolutional neural networks with respect to the high storage requirements and heavy computing-resource consumption of large-scale, complex networks, from the two perspectives of lightweight model design and model pruning, and measures model complexity, computational complexity, and error rate through simulation tests. The main work of this paper includes the following aspects:

(1) Two lightweight convolutional neural network models obtained through careful model design, MobileNet and ShuffleNet, are studied. They adopt depthwise separable convolution and channel shuffle, respectively, to build efficient network architectures (see the first sketch below). Comparisons with other popular network models in simulation tests show that MobileNet and ShuffleNet achieve higher image-recognition accuracy with fewer parameters and computations.

(2) Problems in the densely connected network model are analyzed, and an improved densely connected network model based on group convolution is proposed (see the second sketch below). It effectively lowers the model complexity through group convolution and the computational complexity through an exponentially increasing growth rate, while the accuracy is preserved with only a minor loss.

(3) A neural network pruning strategy based on eigenvalue decomposition is proposed to accelerate the inference of convolutional neural networks (see the third sketch below). The strategy automatically identifies and prunes unimportant convolution kernels during training, so that the numbers of parameters and computations are reduced, and the image-recognition accuracy of the model can be restored through fine-tuning. Moreover, the strategy reduces the model size, runtime memory, and computing operations, introduces minimal training overhead, and enables efficient inference without requiring special software or hardware support.
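For research aspect (1), the sketch below illustrates the two building blocks named above, written in PyTorch as an assumed framework: a depthwise separable convolution of the kind used in MobileNet-style layers, and the channel shuffle operation used in ShuffleNet-style units. The module names, channel counts, and layer ordering are illustrative assumptions, not the exact configurations evaluated in the paper.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel 3x3 convolution
    followed by a 1x1 pointwise convolution (MobileNet-style block)."""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # groups=in_channels makes the 3x3 convolution operate on each channel independently.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        # The 1x1 pointwise convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))


def channel_shuffle(x, groups):
    """Channel shuffle (ShuffleNet-style): permute channels so that
    information flows across the groups of a preceding group convolution."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


# Example usage with hypothetical tensor sizes.
x = torch.randn(1, 32, 56, 56)
y = DepthwiseSeparableConv(32, 64)(x)      # shape: (1, 64, 56, 56)
z = channel_shuffle(y, groups=4)           # same shape, channels interleaved
```

Compared with a standard 3x3 convolution, splitting it into a depthwise and a pointwise step reduces both the parameter count and the number of multiply-accumulate operations, which is the source of the efficiency gains reported for these architectures.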
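For research aspect (2), the sketch below shows how group convolution can be combined with dense connectivity. It is a minimal stand-in for the improved densely connected model, assuming a hypothetical layer with a fixed number of groups; the paper's exact block design and its exponentially increasing growth-rate schedule are not reproduced here.

```python
import torch
import torch.nn as nn


class GroupedDenseLayer(nn.Module):
    """One layer of a dense block in which the 3x3 convolution is grouped
    to cut parameters and FLOPs; its output is concatenated with the input,
    as in densely connected networks."""

    def __init__(self, in_channels, growth_rate, groups=4):
        super().__init__()
        # groups must evenly divide both in_channels and growth_rate.
        self.bn = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3,
                              padding=1, groups=groups, bias=False)

    def forward(self, x):
        new_features = self.conv(self.relu(self.bn(x)))
        # Dense connectivity: concatenate the new features with all previous ones.
        return torch.cat([x, new_features], dim=1)


# Example usage with hypothetical channel counts.
x = torch.randn(1, 64, 28, 28)
layer = GroupedDenseLayer(in_channels=64, growth_rate=32, groups=4)
y = layer(x)                               # shape: (1, 96, 28, 28)
```

Using `groups` output channels per group reduces the weight tensor of the 3x3 convolution by roughly a factor of the group count, which is how group convolution lowers model complexity in this kind of block.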
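For research aspect (3), the sketch below illustrates filter-level pruning of a convolutional layer. The importance score here uses the singular values of each filter's flattened weight matrix (the eigenvalues of W·Wᵀ are the squared singular values of W) as a stand-in criterion; the abstract does not specify the paper's exact eigenvalue-decomposition score, pruning schedule, or fine-tuning procedure, so the function name, the keep ratio, and the masking approach are assumptions for illustration only.

```python
import torch
import torch.nn as nn


def prune_filters_by_spectrum(conv: nn.Conv2d, keep_ratio: float = 0.75) -> torch.Tensor:
    """Score each output filter by the largest singular value of its
    flattened weight matrix and zero out the lowest-scoring filters.
    Returns a boolean mask of the filters that were kept."""
    weight = conv.weight.data                     # shape: (out_ch, in_ch, kH, kW)
    out_ch, in_ch, kh, kw = weight.shape

    scores = []
    for i in range(out_ch):
        mat = weight[i].reshape(in_ch, kh * kw)   # one matrix per filter
        scores.append(torch.linalg.svdvals(mat)[0])  # leading singular value
    scores = torch.stack(scores)

    n_keep = max(1, int(round(keep_ratio * out_ch)))
    keep = torch.topk(scores, n_keep).indices
    mask = torch.zeros(out_ch, dtype=torch.bool)
    mask[keep] = True

    # Zero out pruned filters; a full implementation would rebuild the layer
    # with fewer output channels and adjust the next layer's input channels.
    weight[~mask] = 0.0
    if conv.bias is not None:
        conv.bias.data[~mask] = 0.0
    return mask


# Example usage on a hypothetical layer, followed by fine-tuning in practice.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
kept = prune_filters_by_spectrum(conv, keep_ratio=0.75)
print(f"kept {int(kept.sum())} of {kept.numel()} filters")
```

In practice the pruning step would be interleaved with training and followed by fine-tuning to recover accuracy, as described in the abstract; because whole filters are removed, the resulting model needs no special software or hardware support to run efficiently.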