
Research On Methods Of Privacy-preserving Deep Learning Based On Homomorphic Encryption And Neural Networks

Posted on: 2021-05-13 | Degree: Master | Type: Thesis
Country: China | Candidate: T Y Xie | Full Text: PDF
GTID: 2428330611464278 | Subject: Computer application technology
Abstract/Summary:
Driven by big data, deep learning has achieved excellent results in many fields, including health care, financial investment, commodity trading, and mobile authentication. These successes all rely on large amounts of valuable data, and such data more or less contain personal privacy, such as users' interests, family circumstances, and living habits. Once sensitive information used in deep learning leaks, it can cause unpredictable harm to property and personal safety. Privacy preservation and security in deep learning are therefore important research topics. The purpose of privacy-preserving deep learning is to ensure both the security of data use and the efficiency of model application by applying privacy-preserving strategies (such as encryption algorithms, differential privacy, and noise addition) to the data and models involved in deep learning. Focusing on the privacy preservation of neural networks, this thesis proposes three privacy-preserving deep learning schemes from two aspects, encryption algorithms and model robustness. The main research work is as follows:

(1) This thesis proposes an Efficient Integer Vector Homomorphic Encryption (EIVHE) scheme. From the encryption perspective, the scheme uses homomorphic encryption to encrypt the data sets on which the model is then trained and tested. The core of the scheme is the use of key switching and function polynomialization to achieve fully homomorphic encryption, so that non-polynomial functions can be handled and algebraic operations can be carried out on ciphertexts without introducing excessive noise. Experiments are conducted on the MNIST data set, the accuracy is improved through repeated experiments, and parameters such as training accuracy, test accuracy, time cost, and training period are compared. To maximize the gain in accuracy, taking the absolute value of the data is proposed. The results show that homomorphic encryption can preserve privacy inside neural networks.
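To make the integer-vector homomorphic encryption idea concrete, the toy Python sketch below illustrates the invariant S·c = w·x + e on which schemes of this family rest: ciphertexts can be added and scaled by integers while the secret key still recovers the plaintext. All names and parameters here are illustrative assumptions; the sketch omits the key-switching and noise-management machinery that the actual EIVHE scheme relies on, so it should be read as an illustration of the principle rather than the thesis implementation.

import numpy as np

W = 2 ** 20           # large scaling factor w; noise must stay well below w/2
NOISE_BOUND = 10      # bound on the small error term e

def keygen(n, k, q=2 ** 30):
    # Secret key S = [I | T] with a random integer matrix T (toy construction).
    T = np.random.randint(-q, q, size=(n, k)).astype(np.int64)
    S = np.hstack([np.eye(n, dtype=np.int64), T])
    return S, T

def encrypt(x, T):
    # Build c = [c1; r] so that S @ c = c1 + T @ r = w*x + e.
    n, k = T.shape
    r = np.random.randint(-10, 10, size=k)
    e = np.random.randint(-NOISE_BOUND, NOISE_BOUND + 1, size=n)
    c1 = W * x + e - T @ r
    return np.concatenate([c1, r])

def decrypt(c, S):
    # Recover x = round(S @ c / w); exact while the accumulated noise is small.
    return np.rint(S @ c / W).astype(np.int64)

S, T = keygen(n=4, k=3)
x1 = np.array([1, -2, 3, 7])
x2 = np.array([5, 0, -1, 2])
c1, c2 = encrypt(x1, T), encrypt(x2, T)
# Additive homomorphism: sums and integer multiples of ciphertexts decrypt
# to the corresponding sums and multiples of the plaintext vectors.
assert np.array_equal(decrypt(c1 + c2, S), x1 + x2)
assert np.array_equal(decrypt(3 * c1, S), 3 * x1)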
(2) This thesis proposes a gradient-based algorithm for deceiving deep neural networks, called DeceiveDeep. From the model robustness perspective, the aim is to deceive deep neural networks from the standpoint of computer vision, without misleading human vision and while keeping the use reasonable, and thereby to verify the robustness of the model. Building on the DeepFool algorithm, DeceiveDeep changes gradient descent into gradient ascent and introduces the Euclidean norm into the update of the feature vector, so that the feature vector corresponding to the original data is moved toward the feature vector corresponding to another class. In the experiments, the L-BFGS, FGSM, and DeepFool algorithms are compared with DeceiveDeep; all of them are applied to both deep neural networks and convolutional neural networks, with the MNIST and Fashion-MNIST data sets used for testing. The results show that DeceiveDeep can reduce the accuracy of a neural network and can be used to verify and improve the robustness of the model.

(3) This thesis proposes a Gaussian noise algorithm based on the directionality of the perturbation, called Gaussian Noise DeepFool (GNDF). Again from the model robustness perspective, GNDF builds on DeceiveDeep with deeper analysis to further improve the ability to deceive deep networks and to strengthen privacy security inside deep neural networks. GNDF assumes that the perturbation has directionality, and a proof is given for both binary and multi-class classification tasks. Based on this directionality, the DeepFool algorithm is improved by adding Gaussian noise to its update strategy. Experimental evaluation is carried out by applying GNDF to the MNIST, Fashion-MNIST, and ILSVRC 2012 data sets. The results show that GNDF is slightly better than the original algorithm and can increase the robustness of deep neural networks.
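To make the roles of gradient ascent, the Euclidean-norm update, and the added Gaussian noise more concrete, the following PyTorch sketch shows one plausible reading of a DeceiveDeep/GNDF-style update loop. The function name, step size, noise level, and toy model are assumptions made for illustration; the perturbation is applied to the input rather than to an internal feature vector, and the code is not the thesis implementation.

import torch
import torch.nn.functional as F

def perturb(model, x, label, steps=10, step_size=0.05, noise_std=0.0):
    # Iteratively *ascend* the classification loss; each update is normalized
    # by its Euclidean (L2) norm, and noise_std > 0 adds Gaussian noise to the
    # update in the spirit of GNDF.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)
        update = grad / (grad.norm(p=2) + 1e-12)
        if noise_std > 0:
            update = update + noise_std * torch.randn_like(update)
        x_adv = (x_adv + step_size * update).detach()
        if model(x_adv).argmax(dim=1).item() != label.item():
            break  # the model's prediction has already been flipped
    return x_adv

# Usage on a toy MNIST-shaped classifier (untrained, for illustration only).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.randn(1, 1, 28, 28)
label = torch.tensor([3])
x_adv = perturb(model, x, label, noise_std=0.01)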
Keywords/Search Tags: deep learning, homomorphic encryption, Gaussian noise, adversarial training, privacy preserving