
Differential Privacy Preservation In Deep Learning

Posted on: 2021-01-20    Degree: Master    Type: Thesis
Country: China    Candidate: J W Zhao    Full Text: PDF
GTID: 2428330614963960    Subject: Information security
Abstract/Summary:
In the era of big data, data has become the cornerstone of scientific research. Technologies built on deep learning, such as recommendation systems, speech and image recognition, and autonomous driving, are developing rapidly, and data plays a key role in driving these algorithms to continuously improve. Thanks to its capacity for multi-level abstraction, deep learning achieves high prediction accuracy, but the large amounts of training data it requires usually contain confidential information, which often leads to privacy leakage. With a strong mathematical guarantee that is independent of the adversary's background knowledge, differential privacy has evolved from traditional data release into the state of the art for protecting training data in deep learning.

Combining differential privacy with deep learning requires balancing accuracy against privacy. Two key issues, the loss of utility and the lack of a semantic guarantee, make differential privacy difficult to apply: how to reduce the accumulation of noise caused by repeated iterations, and how to evaluate overall effectiveness, are urgent open problems.

As the basis of this research, mainstream differential privacy protection schemes are first divided into three levels of the deep learning model: the input, hidden, and output layers. Then, constrained by the theoretical privacy guarantee and aiming to reduce the accuracy loss, two methods are studied separately: an adaptive noise-adding method applied to gradients, and a multiple noise-adding method based on the relevance between inputs and outputs. In the first method, built on gradient-descent training, the optimization concerns layer-wise gradient clipping; the privacy accounting is supported by the Moments Accountant mechanism, the security guarantee is validated against a kind of privacy attack, and the limit on the number of training iterations is analyzed. The second method is a multiple differential privacy mechanism that is independent of the number of training epochs: computing the relevance between inputs and outputs makes it possible to add less noise to
the more important features, resulting in higher accuracy.

The effectiveness of differential privacy has been demonstrated in many applications, and it has become a practical standard for privacy protection. Studying its application in deep learning will provide privacy for more related technologies in the future. This thesis presents several experiments that demonstrate the utility of differential privacy both theoretically and visually, and shows that, to a certain extent, these differentially private deep learning models can keep a balance between privacy and usability.
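The core step of the gradient-based method, clipping each layer's gradient to a norm bound and then adding Gaussian noise scaled to that bound, can be sketched as follows. This is a minimal illustration in the style of DP-SGD, not the thesis's exact algorithm: the per-layer bounds `clip_norms` and the function name are assumptions standing in for the thesis's adaptive, layer-wise clipping.

```python
import numpy as np

def clip_and_noise_per_layer(grads, clip_norms, noise_multiplier, rng):
    """Clip each layer's gradient to its own L2-norm bound, then add
    Gaussian noise whose scale is proportional to that bound.

    Per-layer bounds are an illustrative assumption; standard DP-SGD
    (Abadi et al.) uses a single global clipping bound.
    """
    noisy = []
    for g, c in zip(grads, clip_norms):
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds the bound c.
        g_clipped = g * min(1.0, c / (norm + 1e-12))
        # Noise standard deviation tracks the sensitivity bound c.
        sigma = noise_multiplier * c
        noisy.append(g_clipped + rng.normal(0.0, sigma, size=g.shape))
    return noisy
```

With `noise_multiplier = 0` the function reduces to pure per-layer clipping, which is a convenient way to check the clipping behaviour in isolation; the privacy cost of the noisy version over many iterations would then be tracked by an accountant such as the Moments Accountant mentioned above.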
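The relevance-based method can likewise be sketched as a budget-allocation problem: give more relevant features a larger share of a fixed privacy budget, so they receive less noise. The proportional split and the Laplace mechanism below are illustrative assumptions, not the thesis's exact allocation rule, and the function name is hypothetical.

```python
import numpy as np

def relevance_weighted_noise(features, relevance, epsilon_total, sensitivity, rng):
    """Perturb a feature vector with Laplace noise, splitting a total
    privacy budget epsilon_total across features in proportion to their
    relevance scores (more relevant -> larger epsilon -> less noise).

    The proportional split is an illustrative assumption standing in
    for the thesis's input-output relevance computation.
    """
    rel = np.asarray(relevance, dtype=float)
    eps = epsilon_total * rel / rel.sum()   # per-feature budget share
    scale = sensitivity / eps               # Laplace scale b = sensitivity / epsilon
    return np.asarray(features, dtype=float) + rng.laplace(0.0, scale)
```

Because the per-feature budgets sum to `epsilon_total`, sequential composition keeps the overall guarantee, while the important features are perturbed far less than the unimportant ones.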
Keywords/Search Tags:differential privacy, deep learning, privacy protection, gradient clipping, privacy budget