With the vigorous development of artificial intelligence and cloud computing, federated learning has emerged as an effective way to use big data securely, particularly where privacy protection is required. However, because neural networks are difficult to interpret and possess strong information extraction capabilities, private information can be leaked inadvertently through them. Model inference attacks, such as membership inference attacks and feature inference attacks, therefore pose new challenges for privacy protection in federated learning. Measures are needed to enable the safe use of big data, to enhance the privacy, security, and usability of models in federated learning, and to defend effectively against model inference attacks, particularly membership inference attacks.

This dissertation addresses the security threats that model inference attacks pose to federated learning. It studies model inference attack methods and, based on the principles of those attacks, corresponding protection methods. It aims to tackle the new threats posed by feature inference attacks, the poor portability of existing defenses against membership inference attacks, and the correlation between the information inferred by model inference attacks and the participants, so as to achieve efficient privacy-preserving federated learning with high availability, autonomy, and security. The main work and contributions are summarized as follows.

(1) Practical feature inference attack in vertical federated learning. In vertical federated learning, applying the model requires the participation of all training parties, including the active party, which holds the training labels, and the passive parties. Existing feature inference attacks focus on the more capable active party, which controls both the top model and a bottom model, and neglect attacks from the weaker passive parties
who only own a bottom model. This dissertation addresses this gap by studying a feature inference attack mounted by a weaker, resource-constrained passive party under a new black-box threat model. Because the attack model cannot be trained by backpropagation in this setting, zero-order gradient estimation is used to train it effectively. Experimental results demonstrate that, under the same black-box threat model, this attack matches the performance of white-box attacks and significantly outperforms existing black-box attack methods.

(2) Privacy-preserving generative framework against membership inference attacks in federated learning. As a typical model inference attack, a membership inference attack can effectively compromise the membership privacy of training data. Existing defenses against membership inference attacks cannot quickly adapt to different training tasks and objectives, incur substantial utility loss, and cannot fundamentally block the attack. To solve these problems, this dissertation proposes a privacy-preserving generative framework against membership inference attacks. Exploiting the strong information extraction ability of generative models and the quantifiable privacy guarantees of differential privacy, the framework perturbs the training data in the feature space to generate new synthetic data. Using the synthetic data for machine learning applications effectively protects the privacy of the real training data. Experimental results show that the framework effectively resists membership inference attacks and, under the same privacy budget, offers substantial advantages over the general-purpose protection method DP-SGD (Differentially Private Stochastic Gradient Descent).

(3) Anonymous and provably secure aggregation protocol in federated learning. The secure aggregation protocol
is an important tool for securely integrating the information of all participants in federated learning applications. Existing secure aggregation protocols do not provide anonymity at the protocol-design level: they cannot break the correlation between a client and the information inferred about it by model inference attacks, nor hide the connection between the aggregated information and the participants. To solve these problems, this dissertation designs an anonymous and provably secure aggregation protocol. By reducing protocol anonymity to secure two-party inner product computation, the SPOC (Secure and Privacy-preserving Opportunistic Computing) protocol and a homomorphic encryption algorithm are used to realize, respectively, a fast anonymous secure aggregation protocol and a high-security anonymous secure aggregation protocol. Experimental results show that, under a semi-honest threat model, adding anonymity to the secure aggregation protocol does not affect model accuracy, and when the number of model parameters is small, anonymity is achieved with little overhead.
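The zero-order gradient estimation used in contribution (1) can be illustrated by the standard two-point finite-difference estimator over random Gaussian directions; the attacker queries the black-box loss at perturbed points instead of backpropagating through it. The function names and parameters below are illustrative sketches, not the dissertation's implementation.

```python
import random

def zoo_gradient(f, x, mu=1e-4, n_dirs=1000, rng=None):
    """Estimate the gradient of a black-box scalar function f at x
    using two-point finite differences along random Gaussian directions.
    Only queries to f are needed, never its analytic gradient."""
    rng = rng or random.Random(0)
    d = len(x)
    grad = [0.0] * d
    for _ in range(n_dirs):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        f_plus = f([xi + mu * ui for xi, ui in zip(x, u)])
        f_minus = f([xi - mu * ui for xi, ui in zip(x, u)])
        coef = (f_plus - f_minus) / (2.0 * mu)
        for i in range(d):
            grad[i] += coef * u[i] / n_dirs
    return grad
```

For example, with f(x) = ||x||^2 the estimate converges to the true gradient 2x as the number of sampled directions grows; in the attack setting, f would be the loss the passive party can evaluate only through black-box queries.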
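Contribution (2) perturbs training data in the feature space under a differential privacy budget. A minimal sketch of one standard building block, the (epsilon, delta)-DP Gaussian mechanism applied to a feature vector, is shown below; the calibration is the classic one (valid for epsilon < 1), and the function name and parameters are illustrative, not the dissertation's framework.

```python
import math
import random

def gaussian_mechanism(features, l2_sensitivity, epsilon, delta, rng=None):
    """Add Gaussian noise calibrated to (epsilon, delta)-differential
    privacy to a feature vector, using the classic calibration
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon."""
    rng = rng or random.Random(0)
    sigma = l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return [v + rng.gauss(0.0, sigma) for v in features]
```

In a generative pipeline, such noisy feature vectors (rather than the raw data) would be fed to the generator, so any synthetic sample it produces inherits the differential privacy guarantee by post-processing.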
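Contribution (3) builds on secure aggregation, whose core guarantee is that the server learns only the sum of client updates. The generic pairwise-masking sketch below illustrates that guarantee: each pair of clients derives the same mask from a shared seed, one adds it and the other subtracts it, so all masks cancel in the sum. This is a textbook construction, not the dissertation's SPOC/homomorphic-encryption protocol, and all names are illustrative.

```python
import random

MOD = 2 ** 32  # arithmetic over integers modulo 2^32

def mask_update(update, my_id, pair_seeds):
    """Mask one client's integer update vector with pairwise masks.
    pair_seeds maps each peer's id to the seed shared with that peer;
    the lower-id client adds the mask, the higher-id client subtracts it."""
    masked = list(update)
    for peer_id, seed in pair_seeds.items():
        prg = random.Random(seed)  # both parties derive identical masks
        sign = 1 if my_id < peer_id else -1
        for i in range(len(masked)):
            masked[i] = (masked[i] + sign * prg.randrange(MOD)) % MOD
    return masked

def aggregate(masked_updates):
    """Server-side sum: pairwise masks cancel, leaving the true sum."""
    total = [0] * len(masked_updates[0])
    for mu in masked_updates:
        total = [(t + m) % MOD for t, m in zip(total, mu)]
    return total
```

Each individual masked vector is statistically hidden from the server, yet the aggregate is exact. The dissertation's protocols additionally hide which client contributed which masked vector, which this sketch does not attempt.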