
Research On Differential Privacy And Recommender Systems In Federated Learning

Posted on: 2023-11-10
Degree: Doctor
Type: Dissertation
Country: China
Candidate: W Y Liu
Full Text: PDF
GTID: 1528307031952119
Subject: Software engineering
Abstract/Summary:
Federated learning aims to solve the problem of isolated data islands when building machine learning models. It enables collaborative training without accessing the raw data of data owners. However, the parameter exchange process in federated learning lacks formal privacy guarantees. To this end, differential privacy further enforces data protection in federated learning. This thesis considers the relationships among data owners, service providers, and model inquirers. It addresses accuracy disparity and trust-level diversity in differentially private federated learning, as well as intention protection and the implementation of "the right to be forgotten" in federated recommender systems. It overcomes the challenges of imbalanced data, hybrid differential privacy, high computational cost, and selective forgetting in federated learning. Developing solutions for differentially private federated learning and federated recommender systems is an important problem for researchers in academia and industry. In view of these challenges, this thesis focuses on accuracy impact parity in differentially private federated learning, hybrid differential privacy-based federated learning, differentially private federated recommendation, and differentially private federated recommendation unlearning. The main research content and contributions are summarized as follows:

This thesis proposes fair differentially private federated learning to mitigate the disparate accuracy impact on underrepresented and relatively complicated groups of data owners in federated models, i.e., the situation in which the reduction in the federated model's accuracy is disproportionate across data owners. It casts the differentially private federated learning procedure as a bilevel programming problem that integrates a self-adaptive clipping threshold in differential privacy with federated learning. It dynamically adjusts the instance influence of each data owner's local training data according to a theoretical bound on the clipping bias and noise variance in federated learning. Experimental results on several benchmark datasets and scenarios ranging from text to vision show that the proposed method mitigates the disparate impact on federated model accuracy among data owners and achieves state-of-the-art accuracy and fairness.

This thesis proposes hybrid differential privacy-based federated learning to fulfil the different privacy requirements of data owners with respect to service providers. It divides data owners into two categories according to their privacy attitudes in federated learning: data owners who trust the service provider use central differential privacy, while those who distrust the service provider use local differential privacy to protect their local training data. The thesis analyzes convergence in differentially private federated learning, uses an adaptive clipping scheme, and proposes an improved composition method to reduce noise addition in federated learning. Experimental results verify that the proposed method satisfies data owners' different privacy needs and yields usable federated models.

This thesis proposes intention protection in differentially private federated recommendation to hide the intentions of data owners from service providers. It finds that the existence of user-item interactions should be protected in federated item recommendation. The proposed method enables the service provider to perform preference inference on data owners' local devices while simultaneously protecting data owners' interaction privacy. It formulates the notion of (α,β)-accuracy to model the noise in implicit feedback under differential privacy, and proves that the proposed method is (α,β)-accurate and (?)-differentially private. Experimental results show that the proposed method achieves precision comparable to non-private federated item recommendation while offering privacy guarantees to data owners.

This thesis proposes differentially private federated recommendation unlearning to remove the influence of a small amount of the data owners' local interaction data from the federated recommendation model, which protects the data owners' right to be forgotten with respect to model inquirers. Within the framework of differentially private federated recommendation, the thesis proposes a warm-start strategy with 2nd-order optimization methods that uses the data owners' remaining local training data to adjust the federated recommendation model and accelerate unlearning. It further proposes a new federated recommendation unlearning approach that divides the optimization problem of unlearning into many small, tractable sub-problems to overcome the prohibitively high computational cost of 2nd-order optimizers, and uses differential privacy to protect the interaction privacy of data owners in federated recommender systems. Experimental results are promising in terms of consistency (forgetting thoroughness), accuracy (recommendation effectiveness), and efficiency (unlearning speed).
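The mechanism shared by the contributions above is clipping each data owner's model update to a bounded norm and adding calibrated Gaussian noise before aggregation. The following is a minimal illustrative sketch of that idea only; the function names, the fixed clipping threshold, and the noise multiplier are assumptions for illustration, not the thesis's actual adaptive-clipping or bilevel method.

```python
import numpy as np

def clip_and_noise(update, clip_threshold, noise_multiplier, rng=None):
    """Privatize one data owner's update (Gaussian mechanism, simplified):
    scale the update so its L2 norm is at most clip_threshold, then add
    Gaussian noise whose scale is calibrated to that threshold."""
    rng = rng or np.random.default_rng()
    norm = max(np.linalg.norm(update), 1e-12)      # avoid division by zero
    clipped = update * min(1.0, clip_threshold / norm)
    noise = rng.normal(0.0, noise_multiplier * clip_threshold,
                       size=update.shape)
    return clipped + noise

def federated_average(updates, clip_threshold=1.0, noise_multiplier=1.1):
    """Server-side aggregation of privatized updates (FedAvg with DP)."""
    noisy = [clip_and_noise(u, clip_threshold, noise_multiplier)
             for u in updates]
    return np.mean(noisy, axis=0)
```

In the thesis's fair variant the clipping threshold is not fixed as here but adapted per round, since an ill-chosen threshold biases exactly the underrepresented owners whose updates are clipped most.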
Keywords/Search Tags:Differential Privacy, Federated Learning, Recommender Systems, Accuracy Parity, Machine Unlearning