
Research And Implementation Of Federated Learning That Supports Aggregation Under Multiple Keys

Posted on: 2021-09-15
Degree: Master
Type: Thesis
Country: China
Candidate: H Guo
Full Text: PDF
GTID: 2518306569994539
Subject: Computer Science and Technology
Abstract/Summary:
With the continuous development of big data technology, data privacy and data silos have become the main bottlenecks restricting the development of artificial intelligence. Federated learning ensures that data never leaves the local repository and builds a global model through secure multi-party model aggregation, thereby addressing both problems. However, current federated learning frameworks still suffer from the following issues. In aggregation methods without security measures, the central node can directly observe the model parameters and thus infer the participants' private data. In aggregation methods based on single-key homomorphic encryption, every participant holds the same key, so the leakage of a single user's key means the compromise of the whole system; as the number of participants grows, the probability of a key leak somewhere in the system grows with it. Moreover, if a malicious node exists among the participants, it can use the shared key to decrypt the model parameters of the other participants.

This thesis first describes and analyzes in detail the data lifecycle of the traditional machine learning process, discussing the privacy requirements of each stage of the data flow and the role of federated learning in privacy protection. It then proposes an attack that exploits the security weaknesses of the model aggregation and privacy protection methods in existing federated learning frameworks: with limited capability and knowledge, the attacker can turn the aggregated model into a malicious one.

To defend against this attack and resolve the security problems of existing frameworks, the thesis proposes a federated learning scheme that supports model aggregation under multiple keys. Each participant holds a different key, so even if a single participant's key is leaked, the security of the whole system is not affected; likewise, a malicious participant cannot use its own key to decrypt other participants' model parameters. The scheme builds on the existing client-server federated learning framework and double-trapdoor homomorphic encryption. Each participant trains a machine learning model on its local data, encrypts the local model with its own public key using the double-trapdoor homomorphic encryption algorithm, and uploads the ciphertext to the cloud. With the help of a dual-cloud architecture, the models encrypted under different public keys are aggregated in such a way that the cloud cannot obtain the plaintext aggregation result, which is then returned to the participants for decryption. Compared with existing federated learning schemes, this method effectively prevents privacy disclosure caused by key leakage or by malicious participants. It not only protects the privacy of participants' data and models, but also yields stronger generalization ability, because the final model integrates the data of multiple participants.
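To make the multi-key aggregation idea concrete, the following is a minimal, hypothetical Python sketch of such a workflow, built on the BCP (Bresson-Catalano-Pointcheval) double-trapdoor additively homomorphic cryptosystem together with a simplified two-cloud blinding step. The parameter sizes, the scalar "model updates", the cloud roles S1/S2, and the specific masking protocol are illustrative assumptions for this sketch, not the exact construction of the thesis.

# Toy simulation of federated model aggregation under multiple keys, using the
# BCP double-trapdoor additively homomorphic cryptosystem and a simplified
# two-cloud blinding protocol. Illustrative assumptions only: tiny insecure
# parameters, scalar "model updates", and a schematic protocol; this is NOT
# the thesis's exact scheme and must not be used in practice.
import math
import random

# --- public setup (toy safe primes p = 2p'+1, q = 2q'+1; far too small) -------
p, p_ = 1019, 509
q, q_ = 1187, 593
N, N2 = p * q, (p * q) ** 2
MK = p_ * q_                          # master trapdoor, held only by cloud S2

def L(u):                             # (u - 1) / N, defined on elements 1 + tN
    return (u - 1) // N

while True:                           # g = alpha^2 with g^MK = 1 + kN, k invertible
    alpha = random.randrange(2, N2)
    if math.gcd(alpha, N) != 1:
        continue
    g = pow(alpha, 2, N2)
    k = L(pow(g, MK, N2)) % N
    if k and math.gcd(k, N) == 1:
        break
k_inv, mk_inv = pow(k, -1, N), pow(MK, -1, N)

def keygen():                         # every participant gets its OWN key pair
    a = random.randrange(1, N2 // 2)
    return a, pow(g, a, N2)

def encrypt(h, m):
    r = random.randrange(1, N // 4)
    return pow(g, r, N2), (pow(h, r, N2) * (1 + m * N)) % N2

def user_decrypt(a, ct):              # decryption with the participant's own key
    A, B = ct
    return L(B * pow(A, -a, N2) % N2) % N

def master_decrypt(h, ct):            # decryption with the master trapdoor MK
    A, B = ct
    a_modN = (L(pow(h, MK, N2)) * k_inv) % N
    r_modN = (L(pow(A, MK, N2)) * k_inv) % N
    return ((L(pow(B, MK, N2)) - a_modN * r_modN * k) * mk_inv) % N

def add_ct(c1, c2):                   # additive homomorphism under one public key
    return (c1[0] * c2[0] % N2, c1[1] * c2[1] % N2)

# --- participants: train locally, encrypt the update with their own public key
updates = [7, 11, 23]                 # stand-ins for locally trained parameters
keys = [keygen() for _ in updates]
uploads = [encrypt(h, m) for (a, h), m in zip(keys, updates)]

# --- cloud S1: blind every ciphertext with a random mask (holds no key) -------
masks = [random.randrange(N) for _ in uploads]
blinded = [add_ct(ct, encrypt(h, s)) for ct, (a, h), s in zip(uploads, keys, masks)]

# --- cloud S2 (holds MK): open the blinded values, sum, re-encrypt per key ----
blinded_sum = sum(master_decrypt(h, ct) for ct, (a, h) in zip(blinded, keys)) % N
returned = [encrypt(h, blinded_sum) for a, h in keys]

# --- cloud S1: homomorphically strip the total mask before delivery -----------
unmask = (-sum(masks)) % N
returned = [add_ct(ct, encrypt(h, unmask)) for ct, (a, h) in zip(returned, keys)]

# --- each participant recovers the aggregate with its own secret key ----------
for (a, h), ct in zip(keys, returned):
    assert user_decrypt(a, ct) == sum(updates) % N
print("aggregated update recovered by every participant:", sum(updates) % N)

In this toy run, neither cloud sees an unmasked individual update (S1 handles only ciphertexts, S2 only blinded plaintexts), yet every participant decrypts the aggregate with its own secret key; this is the property that aggregation under multiple keys is meant to provide.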
Keywords/Search Tags:federated learning, homomorphic encryption, privacy preserving, multiple keys