
Security And Privacy In Compressed Deep-Neural-Network-Model-Based Applications

Posted on: 2021-04-10    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y S Yan    Full Text: PDF
GTID: 1488306050464484    Subject: Information security
Abstract/Summary:
Deep learning (DL) has achieved remarkable results in many domains. Recent advances in deep neural networks (DNNs) have led to breakthroughs in many modern artificial intelligence (AI) applications, and DNN-based models are widely used in computer vision, speech recognition, face recognition, natural language processing, and other fields. Because of their large number of parameters, these models demand substantial computation and storage, which prevents their deployment in many applications. Model compression can solve this deployment problem. However, attacks on confidentiality, integrity, and availability (CIA) threaten the security of the compressed DNN-based model and the privacy of data during both the training stage and the inference stage of the DL pipeline. Considering these threats, and taking convolutional neural networks (CNNs) as the object of study, we present privacy-preserving deep learning approaches that protect the privacy of the training dataset and model defense mechanisms that enhance model robustness. The content and main contributions of the thesis are summarized as follows:

A DNN-based model can leak sensitive information about its training dataset. To solve this problem, we design a private compressed CNN-based service provision system that preserves data privacy. In this system, the cloud server prunes a large-scale pre-trained model into a small-scale compressed model that can be deployed on the edge server, and the edge server provides enhanced services to nearby IoT devices. Because the training dataset is sensitive, both the pre-trained model and the compressed model must be kept confidential, for two reasons. On the one hand, recent attacks can extract sensitive information about the training dataset from the compressed model. On the other hand, an adversary can infer private information about the training dataset from the pre-trained model; since the compressed model is generated by pruning the pre-trained model, its weight distribution can expose that of the original pre-trained model. We introduce a differential privacy mechanism to protect data privacy and propose an approach for building a compressed CNN-based model perturbed with differential privacy. The private compressed model is constructed in two steps, a private pre-training step and a private compressive training step, and differential privacy is applied in both steps to guarantee the privacy of the training dataset. We conduct experiments on the MNIST and CIFAR-10 datasets; the results show that the compressed model maintains a good trade-off between a tight privacy budget and high utility.

Given the vulnerability of DNNs to adversarial examples, we focus on strengthening a DNN-based model against adversarial examples crafted from the gradients of the model, and we propose a framework for generating a robust compressed CNN-based model under adversarial attack. Because the mobile device has limited computation, the model is partitioned and deployed between the mobile device and the edge server, which collaboratively train the compressed model. To enhance the robustness of the compressed model against adversarial examples, we present a robust compressed CNN-based model generation mechanism. To maintain high test accuracy, we take the weight distribution of the model after compression into account when adding Laplace noise to it, and we present a defensive mechanism based on this weight distribution, as sketched below.
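As a rough illustration of this kind of weight-distribution-aware perturbation (a sketch under our own assumptions, not the thesis's exact mechanism), the snippet below adds Laplace noise only to the surviving weights of a pruned PyTorch model, with the noise scale tied to each layer's empirical weight standard deviation; the function name and the `noise_factor` hyperparameter are illustrative.

```python
import torch

def perturb_compressed_weights(model, noise_factor=0.05):
    """Add Laplace noise to the surviving (non-pruned) weights of a compressed model.

    The per-layer noise scale follows that layer's empirical weight
    distribution (its standard deviation), so the perturbation stays small
    relative to the weights that remain after pruning.  `noise_factor` is an
    illustrative hyperparameter, not a value taken from the thesis.
    """
    with torch.no_grad():
        for name, param in model.named_parameters():
            if "weight" not in name:
                continue
            surviving = param[param != 0]
            if surviving.numel() < 2:
                continue
            scale = noise_factor * surviving.std()
            if scale <= 0:
                continue
            mask = (param != 0).to(param.dtype)          # keep pruned weights at zero
            noise = torch.distributions.Laplace(0.0, scale).sample(param.shape)
            param.add_(noise.to(param.device) * mask)    # perturb only surviving weights
    return model
```

Scaling the noise per layer rather than with one global level is a simple way to keep the perturbation small relative to each layer's remaining weights and thus limit the loss in test accuracy.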
The compressed model supports collaborative device-server inference that provides recognition services to nearby devices, and it is also suitable for deployment directly on the mobile device. Meanwhile, the compressed model retains strong robustness (defensive accuracy) and high utility (test accuracy). We extensively evaluate our mechanism on the MNIST dataset under the FGSM and BIM attacks; compared with models trained without any defense, the results show that our generated models are more effective against adversarial examples.

The performance of a model depends on the size of its training dataset, yet a single party generally owns only a limited dataset. This drives multiple parties to adopt distributed deep learning to improve model performance. However, such training may disclose individual privacy if data is sent to a third party or shared among the parties, and even once the model is generated, it can still be fooled by adversarial examples. Hence, we design an MMD-ED system and present a distributed privacy-preserving deep learning mechanism for generating robust compressed models. The edge server assists multiple mobile devices in training their local robust compressed models; each model is deployed between the mobile device and the edge server, with the mobile device training one part of the model while the edge server learns the other part. To protect the privacy of each individual training dataset, distributed privacy-preserving deep learning based on secure multi-party computation (MPC) is introduced: the mobile devices use secret sharing to compute the average of the intermediate results before sending it to the edge server. To improve the robustness of the model against adversarial attacks, we employ the defensive mechanism on the edge server side; the additive Laplace noise also helps preserve the privacy of the training dataset. The edge server then compresses the model and shares it with the mobile devices, so that each mobile device finally obtains its local robust compressed model.
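To make the secret-sharing step concrete, the following is a minimal sketch of additive secret sharing over a prime field with fixed-point encoding, assuming NumPy; the modulus, scaling factor, three-device setup, and function names are illustrative assumptions rather than the protocol parameters used in the thesis. Each device splits its intermediate result into random shares, the shares are summed position-wise, and only the reconstructed average is revealed.

```python
import numpy as np

PRIME = 2**61 - 1          # illustrative field modulus
SCALE = 10**6              # fixed-point scaling for real-valued activations

def share(value, n_parties, rng):
    """Split a fixed-point encoded tensor into n additive shares mod PRIME."""
    encoded = np.mod(np.round(value * SCALE).astype(np.int64), PRIME)
    shares = [rng.integers(0, PRIME, size=value.shape, dtype=np.int64)
              for _ in range(n_parties - 1)]
    last = np.mod(encoded - np.sum(shares, axis=0), PRIME)
    return shares + [last]

def reconstruct(shares):
    """Recombine shares and decode back to floating point."""
    total = np.mod(np.sum(shares, axis=0), PRIME)
    total = np.where(total > PRIME // 2, total - PRIME, total)  # map back to signed
    return total.astype(np.float64) / SCALE

# Each mobile device secret-shares its intermediate result; the shares are
# summed position-wise, and only the reconstructed *average* is revealed --
# no individual device's result is exposed.
rng = np.random.default_rng(0)
activations = [rng.normal(size=(4,)) for _ in range(3)]   # 3 devices (toy data)
shared = [share(a, n_parties=3, rng=rng) for a in activations]
summed_shares = [np.mod(sum(dev_shares[p] for dev_shares in shared), PRIME)
                 for p in range(3)]
average = reconstruct(summed_shares) / 3
print(np.allclose(average, np.mean(activations, axis=0), atol=1e-5))  # True
```

In the actual system the summation would take place across devices over the network, so that the edge server only ever sees the reconstructed average of the intermediate results.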
Keywords/Search Tags: Deep Learning, Compressed Deep Neural Network, Edge Server, Model Security, Data Privacy