With the rapid popularization of the Internet of Things (IoT) and the continuous development of artificial intelligence, data have become an independent factor of production alongside labour, land, capital and technology. Developing the digital economy and encouraging data sharing across fields has become an important trend in data analysis. However, data owners are often reluctant to share local data out of concern for data privacy and security. Federated learning can train data analysis models on all parties' data without exchanging the raw data, partially addressing the security and privacy issues in data sharing. Nevertheless, researchers have proposed various attacks against federated learning. To defend against these attacks, techniques such as homomorphic encryption and functional encryption are generally used to protect model parameters during training. These encryption schemes, however, do not protect the aggregation weights during model aggregation, so attackers can tamper with the aggregation weights to steal users' training models. Encryption schemes also usually require a key manager to generate and manage keys; if the key manager colludes with the server, keys may be leaked and the server can easily access users' models. In addition, federated learning relies on a semi-honest central server to aggregate models securely, and failure of this server undermines the reliability of the whole system.

To address the issues mentioned above, this paper studies secure aggregation methods for federated learning. The main research and contributions are summarized as follows:

(1) To address the risk of inference attacks caused by aggregation-weight leakage, this paper proposes a weight-hiding secure aggregation method for federated learning, in which neither attackers nor a malicious server can obtain information such as the aggregation weights. Bayesian differential privacy and sparse differential gradients are also applied to reduce the noise perturbation added to the model. Experiments demonstrate that this method effectively prevents attackers from maliciously tampering with the aggregation weights and improves the accuracy of the federated learning model.

(2) To address the risk of collusion between the key manager and the server, this paper proposes a secure aggregation method based on decentralized multi-user functional encryption. The functional decryption key in this method is derived jointly by the users and the server, so no key manager is required. This paper also designs an incentive mechanism combining differential privacy to encourage users to participate actively in federated learning. Experiments demonstrate that this method effectively prevents the server from colluding with a key manager and encourages users to submit high-quality models.

(3) To address the single point of failure of the semi-honest server in federated learning, this paper proposes a blockchain-based secure aggregation method. All participants can aggregate models through a consensus mechanism, and all training models and aggregated models are recorded in blocks, so every participant can track and verify the training process. Experiments demonstrate that this method effectively improves the reliability of the federated learning system.
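To make the aggregation setting concrete, the following is a minimal sketch of FedAvg-style weighted aggregation with client-side Gaussian noise, the baseline that the methods above harden. It is illustrative only: the function names are hypothetical, the aggregation weights appear in the clear here (whereas the weight-hiding scheme conceals them cryptographically), and plain Gaussian perturbation stands in for the Bayesian differential privacy and sparse differential gradients used in the thesis.

```python
import random

def local_update(model, noise_sigma):
    # Client side: perturb the model update with Gaussian noise before
    # sharing, so raw parameters are never exposed (differential-privacy
    # style protection; the thesis uses Bayesian DP with sparse gradients).
    return [w + random.gauss(0.0, noise_sigma) for w in model]

def aggregate(updates, weights):
    # Server side: weighted averaging of client updates (FedAvg-style).
    # In the weight-hiding scheme these weights would be hidden from the
    # server; they are shown in the clear here for illustration only.
    total = sum(weights)
    dim = len(updates[0])
    return [sum(w * u[i] for u, w in zip(updates, weights)) / total
            for i in range(dim)]

# Example: two clients with per-client weights (e.g. local dataset sizes).
noisy = [local_update([1.0, 1.0], 0.01), local_update([3.0, 3.0], 0.01)]
global_model = aggregate(noisy, weights=[3.0, 1.0])
```

An attacker who can rewrite `weights` can bias `global_model` toward a chosen client's update, which is exactly the tampering risk that motivates hiding the aggregation weights in contribution (1).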