As a popular privacy-preserving deep learning architecture, federated learning (FL) has attracted extensive attention in recent years. In collaborative training scenarios with multiple participants, FL allows clients to train locally and upload only the training gradients, effectively avoiding the disclosure of private data inherent in traditional centralized learning. However, FL still faces a series of security and privacy issues. First, since the local training process of FL participants is invisible, malicious participants may poison the global model by injecting well-crafted poisoning samples or by directly modifying the uploaded gradients (i.e., poisoning attacks). In addition, recent work shows that an attacker can still infer participants' private information from the gradients and the global model. To address these problems simultaneously, this thesis analyzes and summarizes the characteristics and application scenarios of existing privacy-preserving federated learning schemes. The specific contributions are as follows:

Since existing federated learning protocols struggle to solve privacy protection and poisoning-attack detection at the same time, this thesis proposes a privacy-preserving federated learning framework, EPPFL. By incorporating proxy re-encryption (PRE) and a novel shuffle protocol based on the Chinese Remainder Theorem (CRT), EPPFL guarantees the privacy of participants' models: no entity can simultaneously obtain a participant's original model and the corresponding identity information. In addition, a novel global-model masking algorithm is introduced to protect the global model's privacy without affecting the participants' training procedure. On the premise of privacy protection, a model evaluation scheme is used in the proposed framework to detect poisoning attacks. Finally, this thesis provides a security proof of EPPFL and shows experimentally that EPPFL can effectively resist two typical poisoning attacks.

This thesis further proposes a new source-anonymous data shuffle scheme, Re-Shuffle, which combines the ideas of oblivious transfer and secret sharing. Re-Shuffle adopts an oblivious transfer protocol to solve the problem that EPPFL cannot protect the anonymity of participants' gradients under collusion attacks. Each participant negotiates a unique data-slot position with the server through oblivious transfer and detects conflicts through a single round of data interaction with the other participants. In this way, each participant learns only his or her own data slot, which remains invisible to the parameter server. Besides, Re-Shuffle adopts a secret sharing protocol to ensure that dropouts of participants during the data collection phase are recoverable. Finally, this thesis provides a security proof and evaluates the communication and computation costs of Re-Shuffle.
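The abstract does not spell out the CRT-based shuffle construction; as a rough, illustrative sketch of the underlying number-theoretic idea (all values and moduli here are hypothetical), pairwise-coprime moduli can give each participant a "slot" in a single packed integer, so that slot contents carry no sender identity:

```python
from math import prod

def crt_combine(residues, moduli):
    """Combine residues x_i mod m_i (pairwise-coprime m_i) into the
    unique x mod prod(m_i) via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Illustrative only: three participants, each holding a small value
# smaller than its assigned coprime modulus (its "slot")
moduli = [101, 103, 107]
values = [42, 7, 99]
packed = crt_combine(values, moduli)
# Each slot is recovered by reducing modulo its modulus; the packed
# integer does not reveal which participant contributed which slot
recovered = [packed % m for m in moduli]  # -> [42, 7, 99]
```

A real protocol would operate on masked gradients and much larger moduli; this sketch only shows why CRT packing lets data be merged and unpacked without ordering information.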
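The dropout-recovery property attributed to secret sharing can be illustrated with a standard Shamir scheme (this is a generic textbook construction, not the thesis's exact protocol): any t of n shares reconstruct the secret, so the loss of up to n - t participants is tolerable.

```python
import random

P = 2**61 - 1  # prime field for the shares

def share(secret, t, n):
    """Split `secret` into n Shamir shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, t=3, n=5)
# Two participants drop out; any three remaining shares still recover the value
print(reconstruct(shares[:3]))  # -> 12345
```

In a data-collection round, each participant would share its contribution among the others, so the server can still complete aggregation when some parties go offline.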