
Design And Implementation Of Exploratory Multidimensional Analysis And Visualization System For Big Data

Posted on: 2021-01-02
Degree: Master
Type: Thesis
Country: China
Candidate: Y F Liu
Full Text: PDF
GTID: 2428330632962926
Subject: Computer Science and Technology
Abstract/Summary:
Convolutional neural networks (CNNs) have shown impressive performance in many fields. However, the deeper the network, the larger its parameter count and computational cost, and the higher its demand for computing power, which greatly hinders the deployment of neural networks on mobile devices. A variety of model compression algorithms have therefore emerged, including low-rank decomposition, knowledge distillation, and network pruning. These algorithms still have notable shortcomings: (1) convolutional neural networks are multi-layer structures, and low-rank decomposition has not accounted for the differences in parameter redundancy across layers, so a reliable method is needed to assign a suitable rank to each layer; (2) deep networks suffer from vanishing gradients when trained with knowledge distillation, which prevents the parameters of shallow layers from being learned well; (3) network pruning lacks an automatic and efficient pruning-ratio allocation strategy usable by non-experts in model compression.

In view of these problems, this thesis focuses on low-rank decomposition, knowledge distillation, and network pruning, and then applies the three compression algorithms to face recognition models. The main contributions are as follows:

(1) A low-rank decomposition compression algorithm with automatic rank allocation is proposed, which addresses the incomplete parameter compression caused by differing parameter redundancy across layers in existing work. It achieves a 48× compression rate without loss of accuracy on Birds-200 (a sketch of the general per-layer rank-selection idea is given below).

(2) A three-network knowledge distillation technique based on fusing local and global knowledge is proposed to address the accuracy degradation caused by low-rank decomposition of convolutional neural networks. On the public datasets Birds-200 and ImageNet-2012, it improves accuracy by 1.23% to 3.27% (an illustrative distillation loss is sketched below).

(3) Two network pruning strategies friendly to non-experts are proposed to address the difficulty of determining the compression ratio in network pruning. The two strategies are compared in detail on CIFAR-10, and their respective advantages and disadvantages are analyzed (a baseline ratio-allocation approach is sketched below).

Based on the above research, this thesis designs and implements a face recognition web system built on the compressed models. The three complementary compression algorithms are applied to the face recognition model to verify their effectiveness. A neural network with few parameters and little computation greatly saves resources, and neural networks can create substantial value if they can be deployed on mobile devices with only a small loss of performance.
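As a rough illustration of per-layer rank selection for low-rank decomposition, the sketch below applies a generic energy-threshold heuristic to the singular values of each reshaped convolution kernel and then truncates the SVD. The abstract does not describe the thesis's actual automatic rank-allocation algorithm, so the `choose_rank` heuristic, the `energy` threshold, and the PyTorch-based `decompose` helper are illustrative assumptions only.

```python
# Illustrative sketch only: a generic energy-threshold heuristic for picking a
# per-layer rank before truncated-SVD decomposition. The thesis's actual
# automatic rank-allocation method is not specified in this abstract.
import torch

def choose_rank(weight: torch.Tensor, energy: float = 0.95) -> int:
    """Smallest rank whose singular values retain `energy` of the spectral energy."""
    # Flatten a conv kernel (out, in, kh, kw) into a 2-D matrix (out, in*kh*kw).
    mat = weight.reshape(weight.shape[0], -1)
    s = torch.linalg.svdvals(mat)
    cum = torch.cumsum(s ** 2, dim=0) / torch.sum(s ** 2)
    return int(torch.searchsorted(cum, torch.tensor(energy)).item()) + 1

def decompose(weight: torch.Tensor, rank: int):
    """Return low-rank factors U_r, V_r with weight (flattened) ≈ U_r @ V_r."""
    mat = weight.reshape(weight.shape[0], -1)
    U, S, Vh = torch.linalg.svd(mat, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]   # (out, rank)
    V_r = Vh[:rank, :]             # (rank, in*kh*kw)
    return U_r, V_r

# Layers with more redundant parameters end up with smaller ranks.
w = torch.randn(64, 3, 3, 3)
U_r, V_r = decompose(w, choose_rank(w, energy=0.95))
```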
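The distillation step can likewise be illustrated with a standard Hinton-style soft-label loss. The three-network, local/global knowledge-fusion scheme proposed in the thesis is not specified in this abstract; the single-teacher `distillation_loss` below, with its temperature `T` and mixing weight `alpha`, is only a minimal sketch of the general mechanism.

```python
# Minimal sketch of a standard soft-label distillation loss, shown only to
# illustrate the general mechanism; not the thesis's three-network scheme.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of a soft-target KL term and hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                       # rescale soft-target gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```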
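Finally, one common baseline for automatic pruning-ratio allocation derives per-layer ratios from a single global sparsity target by thresholding filter L1 norms across the whole network. The two non-expert-friendly strategies proposed in the thesis are not detailed in this abstract, so the `per_layer_prune_ratios` sketch below is an assumed baseline rather than the thesis's method.

```python
# Illustrative baseline: per-layer filter pruning ratios induced by one global
# L1-norm threshold. Not the pruning strategies proposed in the thesis.
import torch
import torch.nn as nn

def per_layer_prune_ratios(model: nn.Module, global_ratio: float = 0.5):
    """Map each Conv2d layer name to the fraction of its filters whose L1 norm
    falls below a single network-wide threshold."""
    layer_norms = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d):
            # L1 norm of each output filter of shape (in, kh, kw).
            layer_norms[name] = m.weight.detach().abs().sum(dim=(1, 2, 3))
    if not layer_norms:
        return {}
    all_norms = torch.cat(list(layer_norms.values()))
    threshold = torch.quantile(all_norms, global_ratio)
    return {name: float((n < threshold).float().mean())
            for name, n in layer_norms.items()}
```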
Keywords/Search Tags: model compression, low-rank decomposition, knowledge distillation, network pruning, face recognition