

Posted on: 2021-11-29 | Degree: Master | Type: Thesis
Country: China | Candidate: Y Chen | Full Text: PDF
GTID: 2518306476950869 | Subject: Signal and Information Processing
Abstract/Summary:
With the maturation and popularization of deep learning, and with the emergence of massive datasets and rich application scenarios, deep networks, represented by the convolutional neural network, have gradually replaced the traditional machine-learning algorithms based on hand-crafted feature extraction. The cost of continually approaching the accuracy limit is growth in network depth and size: models are becoming increasingly bloated, which poses a severe challenge for deploying deep learning products. To deploy models on devices with limited computing resources without degrading their usability, research on model compression has emerged. Starting from fundamental algorithms and specific application scenarios, this thesis systematically studies model compression algorithms and their implementation. The specific work is as follows:

1. For quantization-based model compression, the binarization strategy and training flow of the traditional binary network are studied, and the binarization strategy and weight update are optimized for convolutional neural networks. To address the large accuracy loss of binary networks, a binary combination model based on ensemble learning is proposed, and the network structure is improved so that it reaches the same accuracy level as the original full-precision network on the CIFAR-10 dataset. (A minimal binarization sketch appears after this abstract.)

2. For model compression based on knowledge distillation, the basic teacher-student model and the distillation loss function are studied, and distillation training experiments are designed to evaluate the algorithm. Motivated by the weak dependence of the distillation effect on the teacher network, a self-learning knowledge distillation optimization method is proposed; its performance improvement on CIFAR-100 is similar to that of traditional distillation while saving the model resources of a separate teacher network. (See the distillation-loss sketch below.)

3. From the perspective of a specific application scenario, this thesis takes semantic segmentation as the target task, builds a model on the U-Net network, and trains it on a preprocessed binary human parsing dataset to achieve basic human-body semantic segmentation. On this basis, a systematic compression scheme combining pruning with fine-tuning, sparse tensor representation, and quantization with lookup-table storage is proposed and applied to the U-Net human semantic segmentation network. The optimal pruning rate is obtained through hyperparameter screening; on the processed Human Parsing dataset the accuracy is essentially preserved while an actual model compression ratio of nearly 20 is achieved. (A pruning-and-quantization sketch follows below.)
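The abstract does not specify the exact binarization strategy of item 1. The sketch below shows a common XNOR-Net-style baseline that work in this area typically starts from: weights are binarized to a scaled sign on the forward pass, and a straight-through estimator lets gradients reach the real-valued latent weights. All names and the per-tensor scaling choice are illustrative assumptions, not the thesis's method.

```python
import torch

def binarize_weights(w: torch.Tensor) -> torch.Tensor:
    """Binarize a weight tensor to {-alpha, +alpha}, with alpha the
    per-tensor mean absolute value (XNOR-Net-style scaling; the
    thesis's actual strategy may differ)."""
    alpha = w.abs().mean()
    return alpha * torch.sign(w)

class BinarizeSTE(torch.autograd.Function):
    """Straight-through estimator: binarize on the forward pass;
    on the backward pass, let gradients through unchanged except
    where |w| > 1 (the usual hard-tanh clip), so the real-valued
    latent weights can still be updated."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return binarize_weights(w)

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        return grad_output * (w.abs() <= 1).float()
```

In training, each convolution would use `BinarizeSTE.apply(latent_weight)` in place of its real-valued weight on every forward pass, while the optimizer updates the latent weight.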
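For item 2, a minimal version of the standard teacher-student distillation objective that such experiments usually build on is the Hinton-style soft-target loss: a weighted sum of hard-label cross-entropy and the KL divergence between temperature-softened teacher and student distributions. The `temperature` and `alpha` values here are illustrative defaults, not the thesis's reported settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Hinton-style knowledge distillation loss."""
    # Hard-label term: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, targets)
    # Soft-label term: KL divergence between temperature-softened
    # teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradients match the hard term
    return alpha * hard + (1 - alpha) * soft
```

In the self-learning variant described in item 2, `teacher_logits` would come from the student itself (for example, an earlier snapshot of the same network), which is what removes the resource cost of a separate teacher.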
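As a rough illustration of the compression pipeline in item 3, the sketch below combines magnitude pruning with codebook (lookup-table) quantization. It uses uniform binning as a stand-in for whatever clustering the thesis actually uses, and every function name and default value is an assumption for illustration only.

```python
import numpy as np

def prune_and_quantize(w: np.ndarray, prune_rate=0.8, n_clusters=16):
    """Illustrative pipeline: (1) magnitude pruning at a given rate,
    (2) quantization of surviving weights into a small codebook,
    stored as a lookup table plus per-weight indices."""
    flat = w.flatten()
    # 1. Prune: zero out the smallest-magnitude fraction of weights.
    threshold = np.quantile(np.abs(flat), prune_rate)
    mask = np.abs(flat) > threshold
    survivors = flat[mask]
    # 2. Quantize survivors via uniform binning (a stand-in for
    #    k-means clustering); each bin's centroid forms the codebook.
    edges = np.linspace(survivors.min(), survivors.max(), n_clusters + 1)
    idx = np.clip(np.digitize(survivors, edges) - 1, 0, n_clusters - 1)
    codebook = np.array([survivors[idx == k].mean() if (idx == k).any()
                         else 0.0 for k in range(n_clusters)])
    # Storage = sparsity mask + 4-bit codebook indices + lookup table,
    # which is where the large compression ratio comes from.
    return mask.reshape(w.shape), idx.astype(np.uint8), codebook
```

With `prune_rate=0.8` and 16 clusters, each surviving weight costs 4 bits plus its sparse index instead of 32 bits, which is consistent in spirit with the near-20x compression ratio the abstract reports, though the thesis's actual accounting is not given here.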
Keywords/Search Tags: neural network, weight quantization, ensemble learning, knowledge distillation, compression system