
Research On Privacy Protection In Distributed Deep Learning Based On Differential Privacy

Posted on: 2021-08-09  Degree: Master  Type: Thesis
Country: China  Candidate: D N Yuan  Full Text: PDF
GTID: 2518306047485604  Subject: Communication and Information System
Abstract/Summary:
Deep learning has been widely used in computer vision and natural language processing due to its powerful capability in data analysis. However, a deep learning model with good performance relies on a large amount of training data, which means that fields where data collection is restricted, such as hospitals and banks, cannot take full advantage of deep learning. A distributed deep learning system consists of users and a parameter server. Each user trains a deep learning model on a local dataset and improves the model's performance by sharing parameters. Although a distributed deep learning system does not need to collect the training data centrally, sharing parameters can still leak privacy. Besides, data are usually transferred over wireless networks, which gives attackers opportunities to steal data or disrupt services.

In this paper, we focus on privacy protection methods for distributed deep learning systems. To avoid privacy leakage in a distributed deep learning system, we propose two schemes that apply differential privacy to the shared parameters and to the original data, respectively. Moreover, we design an intrusion detection system based on generative adversarial networks to detect malicious attacks in wireless networks. The main contributions of this paper are as follows:

1. To prevent privacy leakage caused by sharing weights in a distributed deep learning system, we apply differential privacy to the shared weights. In our scheme, each user adds noise that satisfies the definition of differential privacy to the shared weights before uploading them to the parameter server (a sketch of this perturbation step is given after this list). We experiment on the Chest X-ray Images (Pneumonia) dataset. Results show that the classification accuracy still reaches 90% even when differential privacy is applied to the shared weights.

2. To protect sensitive data that will be used for object detection, we inject noise that satisfies the definition of differential privacy into the data. In our scheme, we use a deep learning network to detect objects in blurred data into which Gaussian noise has been injected. We experiment on the INRIA Person Dataset with three deep learning networks. Results show that even though the differentially private noise blurs the images, the deep learning network on edge servers can detect pedestrians in the images with accuracy as high as 97.3%.

3. To counter malicious attacks in wireless networks, we design an intrusion detection system to detect anomalous traffic. In this scheme, we convert network traffic into images and use them to train a CNN model for network traffic classification (a sketch of this conversion is given after this list). To address the imbalanced data problem, we utilize generative adversarial networks to generate synthetic samples that balance the number of samples between the minority and majority classes. We evaluate our intrusion detection system on the UNSW-NB15 dataset. The results show that the precision of anomaly detection is about 96%.
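The following is a minimal sketch of the weight-perturbation step described in contribution 1, assuming the Gaussian mechanism with per-user L2 clipping; the function name privatize_update and the hyperparameters clip_norm, epsilon, and delta are illustrative assumptions, not the thesis's exact algorithm or settings.

import numpy as np

def privatize_update(update, clip_norm=1.0, epsilon=1.0, delta=1e-5):
    """Clip a local weight update and add Gaussian noise calibrated to the
    L2 sensitivity (= clip_norm) so the shared update satisfies
    (epsilon, delta)-differential privacy under the Gaussian mechanism."""
    flat = update.ravel()
    norm = np.linalg.norm(flat)
    # Bound the influence of a single user's update on the shared model.
    clipped = flat * min(1.0, clip_norm / (norm + 1e-12))
    # Standard Gaussian-mechanism noise scale for (epsilon, delta)-DP.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noisy = clipped + np.random.normal(0.0, sigma, size=clipped.shape)
    return noisy.reshape(update.shape)

# Example: a user perturbs its local update before uploading it
# to the parameter server.
local_update = np.random.randn(256, 10)
shared_update = privatize_update(local_update, clip_norm=1.0, epsilon=1.0, delta=1e-5)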
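Below is a minimal sketch of the traffic-to-image conversion used in contribution 3, assuming each network flow is available as a raw byte string; the 28x28 image size and the helper flow_to_image are illustrative assumptions rather than the thesis's exact preprocessing.

import numpy as np

def flow_to_image(payload: bytes, side: int = 28) -> np.ndarray:
    """Truncate or zero-pad a traffic payload to side*side bytes and
    reshape it into a grayscale image with pixel values in [0, 1],
    suitable as input to a CNN traffic classifier."""
    n = side * side
    buf = np.frombuffer(payload[:n], dtype=np.uint8)
    buf = np.pad(buf, (0, n - buf.size))  # zero-pad short flows
    return buf.reshape(side, side).astype(np.float32) / 255.0

# Example usage on a dummy flow.
img = flow_to_image(b"\x45\x00\x00\x3c" * 300)
print(img.shape)  # (28, 28)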
Keywords/Search Tags: Distributed Deep Learning, Differential Privacy, Intrusion Detection