In recent years, with the development of artificial intelligence, deep neural networks have achieved remarkable results in image processing and other fields owing to their excellent feature-extraction ability, in some areas far surpassing traditional machine learning methods. Complex network structures and large amounts of training data have provided strong support for this progress. However, the enormous number of parameters and the heavy computational cost of these models limit the deployment of deep neural networks on resource-constrained mobile devices. In fact, researchers have found that a large portion of a deep neural network's parameters are redundant. To serve mobile deployment, the idea of compressing and accelerating networks while preserving their performance has emerged, but current compression methods still have many deficiencies. Against this background, this thesis studies model compression and acceleration algorithms for deep convolutional neural networks. The main work is as follows.

For compression based on knowledge distillation, this thesis proposes a feature-map-based knowledge transfer method, drawing an analogy with teachers imparting knowledge to students in real teaching scenarios: the student model learns intermediate-layer knowledge from the teacher model, imitating its learned representations. Comparative experiments show that a student model trained with this feature-map-based distillation achieves better performance than existing methods.

However, like most existing knowledge distillation methods, the above approach transfers the teacher model's ability to the student model in a single knowledge-transfer step. It does not account for the fact that when the scale gap between the teacher and student models is large, the student cannot imitate the teacher's ability well. This thesis therefore introduces an auxiliary model whose scale lies between that of the teacher and the student, splitting distillation into two stages to bridge the scale gap between the two models. Experimental comparison shows that this method achieves better results than existing methods.

Finally, combining the above methods, an embedded traffic sign recognition system based on the feature-map multi-level distillation compression algorithm is designed and implemented. Performance tests on an embedded platform show that this work reduces the model's inference time on the embedded device and achieves good results on the traffic sign recognition task.
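To make the two ideas above concrete, the following is a minimal sketch (not the thesis's actual code) of feature-map distillation with an optional intermediate stage. It assumes PyTorch, and assumes hypothetical teacher/student models whose forward pass returns an intermediate feature map alongside the logits; the `FeatureAdapter` projection and all hyperparameters are illustrative assumptions, not details from the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical 1x1 projection to match student and teacher channel counts;
# the thesis's exact alignment layer is not specified in the abstract.
class FeatureAdapter(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_fmap):
        return self.proj(student_fmap)

def feature_map_distillation_loss(student_fmap, teacher_fmap, adapter):
    """MSE between the (projected) student feature map and the teacher's.
    Spatial sizes are interpolated to match if they differ."""
    aligned = adapter(student_fmap)
    if aligned.shape[-2:] != teacher_fmap.shape[-2:]:
        aligned = F.interpolate(aligned, size=teacher_fmap.shape[-2:])
    return F.mse_loss(aligned, teacher_fmap.detach())

def distill_one_stage(teacher, student, adapter, loader,
                      epochs=1, alpha=0.5, lr=1e-3):
    """One teacher->student stage: task loss plus feature-map imitation loss.
    Both models are assumed to return (feature_map, logits)."""
    opt = torch.optim.SGD(
        list(student.parameters()) + list(adapter.parameters()),
        lr=lr, momentum=0.9)
    teacher.eval()
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                t_fmap, _ = teacher(images)      # teacher's mid-layer knowledge
            s_fmap, s_logits = student(images)
            loss = (F.cross_entropy(s_logits, labels)
                    + alpha * feature_map_distillation_loss(s_fmap, t_fmap, adapter))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

Under this sketch, the single-stage method is one call, `distill_one_stage(teacher, student, ...)`, while the auxiliary-model scheme runs it twice with a medium-scale model in between: `distill_one_stage(teacher, assistant, ...)` followed by `distill_one_stage(assistant, student, ...)`, so each stage bridges a smaller scale gap.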