
Research On Privacy-preserving In Multi-party Deep Learning Framework

Posted on: 2021-01-29
Degree: Master
Type: Thesis
Country: China
Candidate: J L Feng
Full Text: PDF
GTID: 2518306050473384
Subject: Circuits and Systems
Abstract/Summary:
In recent years, deep learning has achieved impressive success in many fields, and its performance has approached or even exceeded that of humans in a wide range of applications. A prerequisite for good performance in deep learning is access to massive data for model training. The conventional approach is to collect data from various sources and use it for training once it reaches a certain scale. However, collecting massive data from multiple sources raises privacy concerns: once data is collected centrally, it may be stored permanently and used for purposes of which the original owner is unaware. In practice, private or sensitive data are scattered across multiple users, research institutions, or companies that are unwilling or unable to share data with one another. A researcher who wants to train a deep learning model can then only use their own data, which may be very limited; the resulting model is prone to over-fitting and performs poorly. There is therefore an urgent demand for multi-party deep learning to solve such problems, where data is scattered and cannot be shared directly.

In multi-party deep learning, multiple participants jointly train a deep learning model through a central server to achieve a common objective without sharing their private data. Although this paradigm prevents an attacker from accessing the data directly, private data can still be revealed indirectly through certain techniques. Significant progress has recently been made on the privacy issues of this emerging multi-party deep learning paradigm, but two problems remain in existing work. First, most existing frameworks cannot simultaneously defend against attacks by honest-but-curious participants and an honest-but-curious server without a manager trusted by all participants. Second, existing frameworks consume a high total privacy budget when applying differential privacy, which leads to a high risk of privacy leakage.

This thesis addresses these two problems in the multi-party deep learning scenario. To tackle the first problem, we design a novel multi-party deep learning framework that integrates differential privacy and homomorphic encryption to prevent potential privacy leakage to other participants and to the central server, without requiring a manager that all participants trust. To alleviate the high consumption of the privacy budget, we propose three strategies for dynamically allocating the privacy budget at each epoch, which further strengthen the privacy guarantee without compromising model utility. Moreover, by choosing among the strategies, participants gain an intuitive handle for striking a balance between the privacy level and the training efficiency. Both analytical and experimental evaluations demonstrate the promising performance of the proposed framework and of our strategies for dynamically allocating the privacy budget.
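The abstract does not detail the three allocation strategies, but the underlying idea — splitting a fixed total privacy budget across training epochs and calibrating the noise scale from each epoch's share — can be sketched as follows. The strategy names (`uniform`, `decaying`), the function names, and the decay parameter are illustrative assumptions, not the thesis's actual design; the noise calibration shown is the classic Gaussian-mechanism bound, which holds for per-epoch epsilon below 1.

```python
import math

def dynamic_budget_schedule(total_epsilon, num_epochs, strategy="uniform", decay=0.8):
    """Split a fixed total privacy budget across epochs (illustrative strategies)."""
    if strategy == "uniform":
        # Every epoch receives an equal share of the budget.
        return [total_epsilon / num_epochs] * num_epochs
    if strategy == "decaying":
        # Later epochs receive geometrically smaller shares, so gradient
        # updates are noised more heavily as the model converges.
        weights = [decay ** t for t in range(num_epochs)]
        total = sum(weights)
        return [total_epsilon * w / total for w in weights]
    raise ValueError(f"unknown strategy: {strategy}")

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Noise scale for the Gaussian mechanism achieving (epsilon, delta)-DP
    (classic calibration, valid for epsilon < 1)."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
```

A smaller per-epoch epsilon yields a larger sigma, i.e. noisier updates; a `decaying` schedule spends most of the budget in early epochs and protects late-stage updates more strongly, which is one plausible way such strategies trade privacy level against training efficiency.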
Keywords/Search Tags: privacy, multi-party deep learning, differential privacy, homomorphic encryption, privacy budget