
Research On Key Technologies In Secure Federated Learning And Privacy Inference Computing

Posted on: 2024-03-22  Degree: Doctor  Type: Dissertation
Country: China  Candidate: J Liu  Full Text: PDF
GTID: 1528307340474384  Subject: Cyberspace security
Abstract/Summary:
In the era of digital transformation, the rapid development of big data and cloud computing has accelerated the application and penetration of artificial intelligence (AI) into many fields. Machine learning, the mainstream approach to implementing AI, provides intelligent inference services to the public through AI models and mainly comprises two stages: model training and model inference. In the model training stage, federated learning is used to learn from and analyze large amounts of sample data, constructing an AI model with inference ability. In the model inference stage, the deployed AI model provides services through inference computing to meet user demands. To reduce the cost of machine learning applications and the latency of service responses, an increasing number of AI models are migrated to the cloud and to edge clouds for auxiliary training and inference. This takes advantage of the abundant storage and computing power of cloud servers to process massive data and intensive computing tasks, and exploits the proximity of edge clouds to users to enhance real-time data processing and service delivery.

However, although cloud and edge computing provide substantial assistance, they also make security and privacy issues in federated learning and inference computing more acute. In federated model training, more and more personal data are collected and analyzed, so the protection of data and privacy becomes particularly important. Meanwhile, malicious users may manipulate data to influence or control the trained model, biasing its decisions and threatening the security of the service. In model inference, the AI model is the intellectual property of the service provider, so the model itself needs protection in addition to the requested data. Moreover, outsourcing the service means the inference computing process is no longer under the provider's control, making the credibility of service results another problem. Secure federated learning and privacy-preserving inference computing have therefore become important challenges in the development of machine learning.

To achieve secure and efficient intelligent services, this dissertation studies the key technologies of secure federated learning and privacy-preserving inference computing in the cloud environment, covering the entire process from model training to model inference. In the training stage, a dual-layer protection and adaptive fault-tolerant federated learning scheme is first proposed to ensure the security of data and gradients; then, a privacy-preserving federated learning scheme against poisoning attacks is designed to secure the trained model in the presence of malicious users. In the inference stage, an efficient and accurate privacy-preserving inference computing framework is first designed to provide services efficiently while protecting private information; then, a verifiable privacy-preserving inference computing scheme is proposed to ensure the reliability of services in the cloud. Together, these studies address privacy disclosure, poisoning attacks, and fake services, improving the security and credibility of AI systems built on machine learning. The main contributions of this dissertation are as follows:

1. Federated learning has become a secure and efficient model training approach. However, existing federated learning schemes in the cloud-edge architecture secure the user layer by protecting the gradients uploaded by users, but ignore the protection of the aggregated gradients at the edge layer, leading to gradient leakage, illegal trading, and abuse. This dissertation therefore proposes a dual-layer protection and adaptive fault-tolerant federated learning scheme. First, a layer-by-layer key-splitting construction is implemented through secret sharing, which assigns different keys to edge servers and users so that neither party's individual ciphertext can be decrypted alone, achieving dual-layer protection of gradients. Second, an adaptive fault-tolerant gradient aggregation method is designed that still guarantees correct aggregation when some users fail. Security analysis and extensive experiments show that the scheme strengthens the security of cloud-edge federated learning through two-layer gradient protection and improves efficiency over the latest schemes.

2. Federated learning relies on users uploading gradients for model training. During this process there are not only privacy leaks to worry about but also poisoning attacks. Many privacy-preserving federated learning schemes have been proposed, among which mask-based approaches have advantages in efficiency and functionality; however, no effective method exists to detect poisoning attacks under this approach. To address this challenge, this dissertation proposes a privacy-preserving federated learning scheme against poisoning attacks. First, collinear masks are used to protect the gradient privacy of users. Then, cosine similarity and collinearity verification are used to check the gradients and masks and to identify poisoning attacks launched by malicious users. Finally, poisoning attacks are resisted by eliminating malicious gradients from the aggregation or reducing their weight. Security analysis and experiments show that the scheme effectively detects and mitigates poisoning attacks, with better efficiency than existing privacy-preserving detection works.

3. In model inference, because AI model computation is complex and intensive, service providers tend to deploy the model to the cloud for inference computing. Many privacy-preserving service schemes have been proposed to avoid privacy leakage, but they cannot balance accuracy and efficiency when handling the non-linear functions in AI models. To address this issue, this dissertation proposes an efficient and accurate privacy-preserving inference computing framework. First, the service model and user data are protected by different data-splitting methods, providing a secure multi-cloud deployment mode for participants. Then, a privacy-preserving non-linear function computing method is designed so that cloud servers can perform secure inference computing among themselves. The framework also protects the servers' responses to users, enabling recovery of private service results. Security analysis and experiments demonstrate that the framework achieves efficient and accurate inference computation under privacy protection, and confirm its practicability and effectiveness.

4. Inference computing on cloud platforms can meet the real-time requirements of intelligent services, but the unreliability of cloud platforms raises growing demands for both privacy protection and inference correctness. Current works cannot provide privacy protection together with an effective verification mechanism for inference computing. To solve this problem, in a cloud service architecture with dishonest and malicious behavior, a privacy-preserving and verifiable inference computing scheme is proposed for linear computation, the most basic operation in AI models. The scheme uses secret sharing and blinding to protect privacy, realizing secure linear computation in vector and matrix form. Meanwhile, bilinear mapping and matrix digests are used to verify the service results, ensuring their correctness and reliability. Security analysis and experiments on real data sets demonstrate the confidentiality, reliability, and good service efficiency of the scheme.
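The dual-layer gradient protection in contribution 1 builds on secret sharing. The abstract does not give the construction, but the core property additive secret sharing provides can be sketched as follows; the modulus, the integer encoding of gradients, and the two-party setting are illustrative assumptions, not the scheme's actual parameters:

```python
import secrets

import numpy as np

MOD = 2**31  # illustrative modulus; the real scheme fixes its own ring


def share(x):
    """Split an integer-encoded gradient vector into two additive shares.
    Each share alone is uniformly random and reveals nothing about x."""
    s1 = np.array([secrets.randbelow(MOD) for _ in x], dtype=np.int64)
    s2 = (x - s1) % MOD
    return s1, s2


def reconstruct(s1, s2):
    """Only a party holding both shares can recover the secret."""
    return (s1 + s2) % MOD


grad = np.array([5, 123456, MOD - 1], dtype=np.int64)
a, b = share(grad)
rec = reconstruct(a, b)
```

In a layer-by-layer construction, users and edge servers would hold different shares (or keys derived from them), so compromising a single layer is not enough to decrypt the aggregated gradients.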
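Contribution 2 detects poisoning through cosine similarity between gradients. Setting the masking machinery aside, a minimal plaintext sketch of similarity-based filtering might look like this; the median reference, the zero threshold, and the toy gradients are all illustrative assumptions rather than the dissertation's protocol:

```python
import numpy as np

np.random.seed(0)


def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def robust_aggregate(gradients, threshold=0.0):
    # Use the coordinate-wise median as a robust reference direction,
    # then drop gradients that point away from it.
    ref = np.median(gradients, axis=0)
    sims = [cosine_sim(g, ref) for g in gradients]
    kept = [g for g, s in zip(gradients, sims) if s > threshold]
    return np.mean(kept, axis=0), sims


# Four honest clients push in roughly the same direction;
# one attacker submits a scaled, sign-flipped gradient.
honest = [np.array([1.0, 1.0]) + 0.1 * np.random.randn(2) for _ in range(4)]
poisoned = [np.array([-10.0, -10.0])]
agg, sims = robust_aggregate(honest + poisoned)
```

The attacker's similarity is strongly negative, so its gradient is excluded from the aggregate; the dissertation's scheme performs an analogous check on masked gradients without revealing them.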
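In contribution 3, model and data splitting lets multiple clouds compute on shares. The reason this works for the linear parts of a model is that matrix multiplication distributes over additive splitting. A toy float sketch of that property follows; real schemes share over a finite ring rather than floats, and the non-linear layers need the dedicated protocols that are the framework's actual contribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# The user splits the input into two random additive shares;
# each cloud sees only its own share, never the plaintext input.
x = np.array([0.5, -1.2, 3.0])
x1 = rng.normal(size=x.shape)
x2 = x - x1

W = rng.normal(size=(2, 3))  # a linear layer of the model

y1 = W @ x1  # computed by cloud 1 on its share
y2 = W @ x2  # computed by cloud 2 on its share
y = y1 + y2  # the user sums the responses: W @ (x1 + x2) = W @ x
```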
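Contribution 4 verifies cloud results with bilinear maps and matrix digests, which require pairing-based cryptography. As a lightweight stand-in that conveys the same idea, checking an outsourced product far more cheaply than recomputing it, here is a Freivalds-style probabilistic check; this is not the dissertation's construction:

```python
import numpy as np

rng = np.random.default_rng(0)


def freivalds_verify(A, B, C, trials=10):
    """Check C == A @ B with random projections: each trial costs
    two matrix-vector products, O(n^2), versus O(n^3) to recompute."""
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(1, 1 << 20, size=n)  # random nonzero vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False  # mismatch detected
    return True


A = rng.integers(0, 10, size=(50, 50))
B = rng.integers(0, 10, size=(50, 50))
C_good = A @ B       # an honest cloud's result
C_bad = C_good.copy()
C_bad[3, 7] += 1     # a single tampered entry

ok_good = freivalds_verify(A, B, C_good)
ok_bad = freivalds_verify(A, B, C_bad)
```

A wrong result survives a trial only if the random vector happens to annihilate the error, so repeated trials drive the false-acceptance probability toward zero; the dissertation's bilinear-map digests give a cryptographic rather than probabilistic guarantee.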
Keywords/Search Tags: Federated learning, Inference computing, Privacy protection, Data protection, Verifiable computing