
Research On Key Technologies Of Privacy Protection In Federated Learning Based On Collaborative Training

Posted on: 2023-02-28
Degree: Master
Type: Thesis
Country: China
Candidate: Y Cao
Full Text: PDF
GTID: 2558306905986889
Subject: Computer Science and Technology
Abstract/Summary:
With the deepening of research on machine learning in artificial intelligence and the wide use of distributed big-data technology, data collection and processing have become indispensable, so that large-scale data use, and the privacy leakage that accompanies it, is hard to avoid. Federated learning was introduced to improve privacy protection, yet researchers have found that federated learning itself still suffers from privacy-leakage problems that remain to be resolved. For example, when a participant or the central server is untrustworthy, the generated global model may be corrupted, the privacy of participants may be compromised during model training, or malicious participants may collude with the server so that the uploaded gradient data become a threat. To address these problems, this thesis studies key privacy-protection technologies in federated learning based on collaborative training. The main contributions are as follows:

First, to counter privacy leakage during model training caused by an honest-but-curious server, a collaborative federated learning framework is proposed. While preserving the privacy guarantees of federated learning, the framework performs selective parameter updates based on a unitized stochastic gradient descent algorithm, which has the effect of protecting the gradients themselves. Each participant runs the algorithm during local training, and a parameter-selection mechanism determines which parameters are updated when participants upload to and download from the server. Experiments verify that the framework prevents the server from inferring explicit information from the model, further improving privacy protection while preserving the accuracy of model training.

Second, building on the above work, a privacy-protection technique combining differential privacy with homomorphic encryption is proposed against collusion attacks between the server and malicious participants. The model strengthens protection of the original data by adding Gaussian noise before encryption and improves on the basic homomorphic encryption algorithm. It ensures that gradient information is never exposed in plaintext and enhances the privacy of the original model without affecting its other performance. Experiments show that the proposed model effectively mitigates the privacy exposure caused by collusion between participants and the server.

Addressing these two issues strengthens the privacy protection of federated learning itself and improves its security, so the work has both theoretical research significance and practical application value.
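The first contribution, unitized stochastic gradient descent with selective parameter upload, might be sketched as follows. This is a minimal illustration under assumptions: the abstract does not specify the normalization or the selection rule, so unit-L2 gradient scaling and top-k selection by magnitude of change are used here as plausible stand-ins, and all function names are hypothetical.

```python
import math

def unitized_sgd_step(params, grads, lr=0.1):
    """One 'unitized' SGD step: the gradient vector is rescaled to unit
    L2 norm before the update, so its raw magnitude is never exposed."""
    norm = math.sqrt(sum(g * g for g in grads)) or 1.0
    return [p - lr * g / norm for p, g in zip(params, grads)]

def select_updates(old, new, k):
    """Parameter-selection mechanism (hypothetical top-k variant):
    upload only the k coordinates that changed the most."""
    order = sorted(range(len(old)),
                   key=lambda i: abs(new[i] - old[i]), reverse=True)
    return {i: new[i] for i in order[:k]}

# A participant's local step, followed by a partial upload to the server.
params = [0.5, -1.2, 3.0, 0.0]
grads = [0.3, -0.1, 2.0, 0.05]
updated = unitized_sgd_step(params, grads)
upload = select_updates(params, updated, k=2)  # only 2 of 4 parameters leave the client
```

Because the update direction is normalized, the server observes only a fixed-length step, and the partial upload further limits what can be inferred from any single round.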
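The second contribution, Gaussian perturbation before encryption, might be illustrated as below. The (epsilon, delta) calibration is the standard Gaussian mechanism; the pairwise-cancelling masks are a deliberately lightweight stand-in for a real additively homomorphic scheme (e.g. Paillier), used here only so the sketch stays self-contained, and all names are hypothetical.

```python
import math
import random

def gaussian_sigma(sensitivity, eps, delta):
    """Noise scale of the Gaussian mechanism for (eps, delta)-DP."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / eps

def perturb(grad, sigma):
    """Add Gaussian noise to each gradient coordinate before encryption."""
    return [g + random.gauss(0.0, sigma) for g in grad]

def mask_shares(n_parties, dim):
    """Pairwise-cancelling masks standing in for additive homomorphic
    encryption: each party's mask hides its own gradient, but the masks
    sum to zero, so the server can recover only the aggregate."""
    masks = [[random.uniform(-1, 1) for _ in range(dim)]
             for _ in range(n_parties - 1)]
    last = [-sum(col) for col in zip(*masks)]
    return masks + [last]

# Each participant perturbs its gradient, then masks it before upload.
grads = [[0.2, -0.5], [0.1, 0.4], [-0.3, 0.2]]
sigma = gaussian_sigma(sensitivity=1.0, eps=1.0, delta=1e-5)
masks = mask_shares(len(grads), dim=2)
uploads = [[g + m for g, m in zip(perturb(gr, sigma), mk)]
           for gr, mk in zip(grads, masks)]
aggregate = [sum(col) for col in zip(*uploads)]  # masks cancel: noisy sum only
```

Even if the server colludes with some participants, any individual upload reveals only a noised, masked gradient; the plaintext gradient is never transmitted, which mirrors the protection the thesis attributes to combining differential privacy with homomorphic encryption.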
Keywords/Search Tags:Privacy protection, Federated learning, Stochastic gradient descent, Differential privacy, Homomorphic encryption