
Research On Privacy-Preserving Deep Learning Scheme

Posted on: 2020-05-31
Degree: Master
Type: Thesis
Country: China
Candidate: Q J He
Full Text: PDF
GTID: 2428330575966293
Subject: Information security
Abstract/Summary:
In recent years, deep learning has achieved great success in more and more fields. High-quality, large-scale datasets are the key to improving the capability of deep learning models, and accuracy can be improved by training over multiple participants' pooled data. However, such collaborative learning threatens the privacy of users' sensitive data. To resolve this contradiction, this paper proposes TransNet (Transformed-Layer-based Deep Neural Network), a new scheme for privacy-preserving collaborative neural network training that allows multiple participants to train a neural network over their pooled dataset with the help of a training server, while preventing their sensitive input data from being revealed. The security model is semi-honest: the server and all participants honestly follow the prescribed protocol but try to learn as much as possible about other parties' private data.

The scheme builds a privacy-preserving neural network by adding a transformed layer to the network. The transformed layer applies an irreversible transformation defined by a linear indeterminate system, whose infinitely many solutions protect privacy, and the neural network is then trained over the pooled outputs of the transformed layer. Why this works is analyzed through Lipschitz continuity. TransNet contains three sub-schemes, TransNet-V, TransNet-H and TransNet-A, which support vertically, horizontally and arbitrarily partitioned datasets, respectively.

TransNet has lower computation and communication complexity than previous schemes based on secure multi-party computation or homomorphic encryption. Compared with previous schemes based on differential privacy or distributed stochastic gradient descent, which support only horizontally partitioned datasets, it is insensitive to the number of participants and supports arbitrarily partitioned datasets. TransNet places no special security requirements on the training server and is independent of the network architecture, including Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), etc. It is therefore practical and efficient to deploy.

Comprehensive experiments are performed on the LETTER, MNIST, FASHION and SVHN datasets using different kinds of neural network models. The results demonstrate the effectiveness and advantages of TransNet: it trains as quickly as the original neural network, and with proper parameters it achieves accuracy close to the baseline trained over the pooled original dataset. TransNet thus achieves privacy preservation at the cost of only a small loss in accuracy.
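The core idea of the transformed layer can be sketched concretely. The Python/NumPy fragment below is a minimal illustration only, not the thesis's exact construction; the matrix name A, the dimensions, and the way the transformation is generated and distributed are assumptions made for clarity. The point is that mapping a d-dimensional record through a k-by-d matrix with k < d leaves an underdetermined (indeterminate) linear system, so the original record cannot be uniquely recovered from its transformed output.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_transform(in_dim, out_dim, rng):
        # Random transformation matrix with out_dim < in_dim.
        # For y = A @ x, the system A @ x = y has fewer equations than
        # unknowns, so infinitely many candidate x exist: the map is irreversible.
        assert out_dim < in_dim
        return rng.standard_normal((out_dim, in_dim))

    d, k = 8, 5                         # illustrative dimensions, k < d
    A = make_transform(d, k, rng)       # hypothetical transformation matrix

    x_private = rng.standard_normal(d)  # one participant's raw record
    y_shared = A @ x_private            # transformed record handed to the training server

Under this reading, the server trains the remaining layers of the network on the transformed records (and their labels) only; how the transformation is coordinated across vertically, horizontally or arbitrarily partitioned data is what distinguishes the TransNet-V, TransNet-H and TransNet-A sub-schemes.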
Keywords/Search Tags: Privacy-preserving, Deep Learning, Irreversible Transformation, Indeterminate System