Artificial intelligence is an important direction of future human development, and deep learning is among its most prominent families of algorithms. Deep learning has not only advanced computer vision, natural language processing, recommendation systems, and other fields, but has also assisted research in biology, medicine, and other disciplines. Despite its popularity, deep learning models are computationally inefficient: they typically must be deployed on expensive high-compute platforms rather than on low-cost platforms such as embedded devices or mobile phones, which limits the broad application of deep learning. Improving the efficiency of neural networks is therefore a meaningful problem. In research on accelerating deep learning, some researchers optimize the model structure to remove unnecessary floating-point computation and thereby speed up model inference. This family of methods is known as efficient neural network structure design, and it mainly comprises three sub-methods: model pruning, efficient architecture design, and neural architecture search. The method proposed in this thesis belongs to efficient architecture design.

This thesis proposes a novel efficient convolution, group-shared convolution, in which the input features are divided into groups and the features within each group share the same convolution kernel, reducing the floating-point computation of the convolution. However, every efficient convolution pays for its efficiency gain with some defect, and group-shared convolution is no exception: it inevitably faces the problem of information homogenization within each group. To overcome this defect, this thesis combines different efficient convolutions so that they compensate for one another, builds a new model cell, and then constructs a new efficient convolutional neural network architecture, SharedNet, on top of this cell. On the well-known large-scale dataset ImageNet, SharedNet achieved the best result (73.06%) among manually designed efficient convolutional neural networks.

Nevertheless, SharedNet still has shortcomings in efficiency and performance. On the one hand, its model cell fails to specialize the function of each convolution; on the other hand, its architecture lags behind mainstream efficient convolutional neural network architectures. Therefore, at the cell level, this thesis mainly draws on the MobileNetV3 cell and the ShuffleNet cell to design the positive shared residual bottleneck and the reverse shared residual bottleneck; at the architecture level, it designs a novel efficient convolutional neural network, SharedNetV2, based on the MobileNetV3 architecture. SharedNetV2 surpasses Google's MobileNetV3 and Huawei Noah's Ark Lab's GhostNet on the ImageNet dataset, achieving 76.03% accuracy.
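To make the kernel-sharing idea concrete, here is a minimal NumPy sketch of a group-shared convolution as the abstract describes it: channels are split into groups and every channel in a group is convolved with one shared k×k kernel, so the kernel parameter count drops from C·k·k (one kernel per channel, depthwise-style) to groups·k·k. The function name, shapes, and the valid-padding/stride-1 choices are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def group_shared_conv2d(x, kernels, groups):
    """Illustrative group-shared convolution (assumed formulation).

    x:       (C, H, W) input feature map
    kernels: (groups, k, k) -- ONE kernel per group, shared by all
             channels in that group (this is the parameter saving)
    Valid padding, stride 1, per-channel output (no cross-channel sum).
    """
    C, H, W = x.shape
    g, k, _ = kernels.shape
    assert g == groups and C % groups == 0
    per_group = C // groups
    out = np.zeros((C, H - k + 1, W - k + 1))
    for c in range(C):
        ker = kernels[c // per_group]  # same kernel for the whole group
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * ker)
    return out

# Toy check: 4 channels in 2 groups need only 2 distinct 3x3 kernels
# (6 * 3 * 3 = 54 fewer parameters than 4 per-channel kernels... here
# 2*9=18 vs 4*9=36 weights).
x = np.random.rand(4, 8, 8)
kernels = np.random.rand(2, 3, 3)
y = group_shared_conv2d(x, kernels, groups=2)
print(y.shape)  # (4, 6, 6)
```

The sketch also makes the defect mentioned above visible: because channels 0 and 1 are filtered by the identical kernel, their outputs differ only through their inputs, which is the within-group information homogenization that SharedNet's combined cell is designed to counteract.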