Multi-object tracking is a fundamental research direction in computer vision, with important applications in intelligent surveillance, autonomous driving, and smart cities. To pursue higher tracking accuracy, current deep-learning-based multi-object tracking algorithms often require large amounts of computation and many parameters, which greatly hinders their practical deployment. Knowledge distillation is a common model-lightweighting technique: its core idea is to transfer the knowledge of a powerful but hard-to-deploy teacher model to an easy-to-deploy lightweight student model. It is widely used in image classification, object detection, and other fields. However, there has been little research on knowledge distillation for multi-object tracking. The main reason is that knowledge distillation methods designed for a single task are transferred directly to multi-object tracking, which jointly learns multiple tasks, and the distillation effect is poor because the essential differences and connections between the two settings have not been analyzed in depth. Therefore, this thesis investigates why existing knowledge distillation methods perform poorly on multi-object tracking models and, from the perspectives of feature-based and response-based knowledge distillation, proposes a knowledge-distillation-based lightweighting method for multi-object tracking models. The main work of this thesis is as follows:

(1) On the feature-based side, this thesis designs knowledge distillation methods for the ID features and the middle-layer features of the multi-object tracking model, improving the feature extraction ability of the lightweight student model. For ID features, an ID feature knowledge distillation method based on environmental response consistency is proposed; it effectively models the responses of the target and its surrounding environment and, through the design of the loss function, improves the student model's ability to represent the target. For middle-layer features, a knowledge distillation method based on key feature learning is proposed, which makes the student model focus on learning the teacher model's most beneficial features and improves the distillation effect under foreground-background imbalance.

(2) To further exploit the tracking performance advantages of the teacher model, this thesis proposes a response-based knowledge distillation method built on prediction differences. The method analyzes the distillation sample selection strategy and comprehensively measures, along multiple dimensions, the difference between the student model's and the teacher model's predictions for each sample. Through the design of the loss function, the student model dynamically adjusts each sample's distillation loss weight according to this prediction difference during training, thereby improving the detection performance of the lightweight model.

(3) Combining the feature-based and response-based directions, this thesis proposes a knowledge distillation lightweighting method for multi-object tracking models. The method is evaluated extensively on a variety of lightweight tracking models, multi-object tracking algorithms, and datasets, demonstrating its effectiveness and generality and outperforming existing knowledge distillation methods. Taking the CSTrack multi-object tracking algorithm as an example, for the lightweight YOLOv5-S student model trained with this method, IDF1 increases from 61.6 to 65.5, MOTA from 64.3 to 66.8, and MT from 27.6 to 33.5, while the tracking speed reaches 33.2 FPS, giving the method high application value.
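The foreground-background imbalance idea behind the middle-layer feature distillation can be illustrated with a minimal sketch: a weighted mean-squared error between student and teacher feature maps in which foreground positions count more than background ones. The specific weighting scheme and the `fg_weight` value are assumptions for illustration, not the thesis's exact formulation.

```python
def feature_distill_loss(student_feat, teacher_feat, fg_mask, fg_weight=4.0):
    """Foreground-weighted MSE between flattened feature maps (sketch).

    student_feat, teacher_feat: lists of floats of equal length.
    fg_mask: list of 0/1 flags marking foreground positions.
    fg_weight: assumed up-weighting factor for foreground positions.
    """
    assert len(student_feat) == len(teacher_feat) == len(fg_mask)
    total, norm = 0.0, 0.0
    for s, t, m in zip(student_feat, teacher_feat, fg_mask):
        w = fg_weight if m else 1.0  # foreground positions count more
        total += w * (s - t) ** 2
        norm += w
    return total / norm
```

Because the weights normalize the loss, a few foreground positions can dominate a large background, which is the intended behavior when foreground pixels are rare.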
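The prediction-difference weighting of contribution (2) can likewise be sketched as a per-sample distillation loss whose weight grows with the gap between student and teacher predictions, so samples the student gets most wrong relative to the teacher are emphasized. The linear weighting form and the `alpha` parameter are assumptions for illustration.

```python
def response_distill_loss(student_pred, teacher_pred, alpha=2.0):
    """Prediction-difference-weighted response distillation loss (sketch).

    student_pred, teacher_pred: per-sample scalar predictions.
    alpha: assumed scaling factor controlling how strongly the
           student-teacher gap inflates a sample's loss weight.
    """
    losses = []
    for s, t in zip(student_pred, teacher_pred):
        diff = abs(s - t)            # prediction difference for this sample
        weight = 1.0 + alpha * diff  # dynamic weight grows with the gap
        losses.append(weight * (s - t) ** 2)
    return sum(losses) / len(losses)
```

In actual training the weight would be recomputed every iteration, so the emphasis shifts dynamically as the student catches up with the teacher on formerly hard samples.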