
Research On Model Security And User Data Privacy Protection In Federated Learning Architecture

Posted on: 2022-09-24    Degree: Master    Type: Thesis
Country: China    Candidate: C A Zhou    Full Text: PDF
GTID: 2518306752499624    Subject: Signal and Information Processing
Abstract/Summary:
Federated learning is a distributed architecture built around the idea of data isolation, and it has attracted wide attention in the machine learning community. Under the federated learning architecture, the central server repeatedly receives and aggregates the local model parameters uploaded by clients in order to train a global model. However, even though raw data is never transmitted directly, malicious clients can still upload carefully crafted model updates to degrade system performance, or even prevent the system from converging at all. In addition, the server may steal users' private data by reverse-analyzing the uploaded models. This thesis focuses on the security and privacy of the federated learning system and obtains the following results:

(1) For the problem of model poisoning attacks in federated learning, the convergence of the system loss function is studied under two attack modes: parameter inversion and noise perturbation. Theoretical analysis shows that when malicious clients are present and the total number of client computation iterations is fixed, there exists an optimal number of local training iterations that achieves the best system performance. When the proportion of malicious clients stays constant and the total number of clients increases, the attack strength of parameter inversion remains unchanged, while the influence of noise perturbation decreases. The simulation results verify the correctness of the theoretical analysis.

(2) Three algorithms are proposed to defend against model poisoning attacks in federated learning. The first builds a pre-test data set to check the quality of the parameters uploaded by each client; the second computes the correlation between the parameters a client uploads in two consecutive rounds to judge whether the client has tampered with them deliberately; the third uses a deep neural network detector to decide whether the uploaded parameters are normal. The simulation results show how these three methods perform against different attacks.

(3) To protect the private information of clients, differential privacy is introduced into federated learning and a privacy-preserving algorithm based on the Gaussian mechanism is designed. Before uploading its parameters, each client adds Gaussian noise calibrated to a chosen standard in order to blur and hide its own information and guarantee the privacy level of the system. The convergence of the federated learning system with this mechanism is also analyzed, revealing a trade-off between the privacy and the usability of the system; the simulation results verify this phenomenon. (A simplified sketch of the aggregation, attack, and noise mechanisms described above follows the abstract.)
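The following is a minimal, self-contained Python sketch, not the code used in the thesis, of the mechanisms the abstract refers to: FedAvg-style aggregation of client updates, the two poisoning modes of result (1) (parameter inversion and noise perturbation), the correlation check of result (2), and the Gaussian-mechanism noise of result (3). The toy quadratic objective, all hyper-parameters, and all function names are illustrative assumptions rather than the thesis's actual settings.

import numpy as np

rng = np.random.default_rng(0)
DIM = 10                        # dimension of the toy model
W_TRUE = rng.normal(size=DIM)   # optimum of the toy quadratic loss ||w - W_TRUE||^2

def local_update(w, local_iters=5, lr=0.1):
    # Benign client: a few gradient-descent steps on the toy quadratic loss.
    for _ in range(local_iters):
        w = w - lr * 2.0 * (w - W_TRUE)
    return w

def poison(delta, mode, noise_std=1.0):
    # The two attack modes of result (1): parameter inversion and noise perturbation.
    if mode == "inversion":
        return -delta                                   # flip the sign of the update
    if mode == "noise":
        return delta + rng.normal(0.0, noise_std, DIM)  # add a random perturbation
    return delta

def gaussian_mechanism(delta, clip=1.0, sigma=0.5):
    # Result (3): clip the update, then add Gaussian noise before uploading.
    clipped = delta * min(1.0, clip / (np.linalg.norm(delta) + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, DIM)

def upload_correlation(prev_delta, curr_delta):
    # Result (2), second defence idea: similarity between a client's consecutive
    # uploads; a sudden drop can flag a tampered update.
    denom = np.linalg.norm(prev_delta) * np.linalg.norm(curr_delta) + 1e-12
    return float(np.dot(prev_delta, curr_delta) / denom)

def fed_round(w_global, n_clients=20, n_malicious=4, attack="inversion", dp=True):
    # One communication round: clients train locally, the server averages the uploads.
    deltas = []
    for k in range(n_clients):
        delta = local_update(w_global.copy()) - w_global   # the client's model update
        if k < n_malicious:
            delta = poison(delta, attack)
        if dp:
            delta = gaussian_mechanism(delta)
        deltas.append(delta)
    return w_global + np.mean(deltas, axis=0)              # FedAvg-style aggregation

w = np.zeros(DIM)
for _ in range(50):
    w = fed_round(w, attack="inversion", dp=True)
print("distance to optimum:", np.linalg.norm(w - W_TRUE))

# Correlation check: comparing an honest upload with a sign-inverted copy of itself
# shows how parameter inversion destroys the correlation between uploads.
d_honest = local_update(w.copy()) - w
d_poisoned = poison(d_honest, "inversion")
print("honest vs. honest:  ", upload_correlation(d_honest, d_honest))
print("honest vs. inverted:", upload_correlation(d_honest, d_poisoned))

Running the loop with dp set to False and True, or switching attack between "inversion" and "noise" while varying n_clients, gives a rough feel for the attack-strength and privacy-usability trade-offs that the thesis analyzes formally; the sketch is only meant to make the moving parts concrete.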
Keywords/Search Tags:federated learning, security and privacy, model poisoning, convergence analysis, defense mechanism