Research On Deep Learning Algorithms Via Structural Sparsity

Posted on: 2021-01-29
Degree: Master
Type: Thesis
Country: China
Candidate: Z P Yuan
Full Text: PDF
GTID: 2428330614472049
Subject: Operational Research and Cybernetics
Abstract/Summary:
In recent years, with the growing popularity of artificial intelligence applications, deep learning, as one of the core technologies, has attracted widespread attention in both academia and industry. Deep learning models have shown very strong fitting ability in fields such as computer vision, autonomous driving, natural language processing, and speech recognition, but they often carry a very large number of parameters and demand prohibitive computing resources. Recent research shows that deep learning models are sparse, and many papers apply sparse regularization or structured sparse regularization methods to address these problems. The latter is a hot research topic because structurally sparse models are more amenable to structured pruning.

In this thesis, we exploit the sparsity of deep learning models and, combining it with the scale vector parameters of the batch normalization layers, establish an original L0-regularized structurally sparse deep learning model, and design an algorithm to solve it. First, we introduce the foundations of deep learning models, elaborate on the challenges deep learning faces, and analyze its sparsity. We then survey the currently popular structured sparse models, analyze in detail how sparsity is introduced into each model, present the corresponding algorithms and algorithmic frameworks, and review the advantages and disadvantages of each algorithm. On this basis, we establish a structurally sparse model based on the L0 norm, discuss the key theory behind the model, and propose an L0-PG algorithm to solve it.

Finally, in the numerical experiments, we compare our algorithm with existing algorithms in terms of prediction accuracy, computation time, and other criteria. The results show that our L0-PG algorithm achieves the highest accuracy while cutting computation time by more than 30% on data sets from a variety of fields.
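The abstract describes an L0-regularized, structurally sparse model built on the scale vectors of the batch normalization layers and solved with a proximal gradient (L0-PG) method, but it does not spell out the update rule. The code below is therefore only a minimal sketch of the general idea, assuming a proximal gradient scheme in which an ordinary gradient step is followed by the L0 proximal operator (hard thresholding) applied to the batch-normalization scale parameters; the function names (l0_prox, l0_pg_step) and hyperparameters (lr, lam) are illustrative assumptions, not the thesis's definitions.

```python
import torch
import torch.nn as nn

def l0_prox(v: torch.Tensor, lam: float, step: float) -> torch.Tensor:
    # Proximal operator of step * lam * ||v||_0: hard thresholding.
    # Keeps v_i when v_i^2 > 2 * step * lam, sets it to zero otherwise.
    keep = v.pow(2) > 2.0 * step * lam
    return v * keep

def l0_pg_step(model: nn.Module, loss: torch.Tensor, lr: float, lam: float) -> None:
    # One proximal-gradient update: a plain gradient step on all parameters,
    # followed by the L0 proximal map applied only to the batch-normalization
    # scale vectors (gamma), so whole channels can be zeroed out.
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad                              # gradient step
        for m in model.modules():
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
                m.weight.copy_(l0_prox(m.weight, lam, lr))    # proximal step

# Toy usage on random data (shapes are arbitrary):
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
l0_pg_step(model, nn.functional.cross_entropy(model(x), y), lr=0.1, lam=1e-3)
```

A zeroed gamma entry switches off an entire channel, which is why penalizing the batch-normalization scale vector yields structured (channel-level) rather than unstructured sparsity; zeroed channels can then be removed by structured pruning. The thesis's exact L0-PG formulation may differ from this sketch.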
Keywords/Search Tags: Deep learning, Neural network, Backpropagation algorithm, Structural sparsity, Proximal operator, PG algorithm