With the proliferation of client devices, the growing maturity of the Internet of Things (IoT), and people's increasing awareness of privacy protection, safeguarding user privacy while completing model training has become a critical challenge in modern Internet systems. In recent years, Federated Learning (FL) has emerged as a promising privacy-preserving machine learning technique that enables joint training by a server and multiple clients; FL has made significant advances and is increasingly being used to address this challenge. Traditional FL aims to train a single global model with strong generalization performance on the server side. However, because client data are non-independent and identically distributed (Non-IID), a single global model either struggles to converge or yields suboptimal performance. To alleviate the impact of the Non-IID problem, Personalized Federated Learning (PFL) was proposed, which aims to provide personalized models for different clients. Existing research suggests that PFL algorithms face several challenges, including limited performance gains, complex training procedures, and high communication costs. Moreover, there is a fundamental tension between the goals of PFL and traditional FL, in which a model's personalization performance and generalization performance are mutually exclusive. This incompatibility greatly limits the development prospects and research value of PFL.

To address these shortcomings, this thesis first proposes a PFL algorithm based on model fine-tuning and head aggregation. The method enhances the personalization performance of local models on the clients through a new fine-tuning training strategy and improves the generalization performance of the global model on the server by aggregating model heads (a minimal sketch of this server-side step is given below). This thesis provides convergence guarantees for the algorithm under both convex and non-convex conditions and, under various data heterogeneity settings, verifies the algorithm's improvement in model personalization performance and its compatibility with global model generalization performance.

Further considering the impact of intra-class variance in client data on model performance, we build on this method to propose a PFL algorithm that combines a cosine-similarity classifier with a classifier aggregation module (the cosine-classifier idea is also sketched below). On the client side, local models are trained with fine-tuned cosine classifiers; on the server side, the classifier aggregation module aggregates the personalized classifiers and generates a global classifier. This thesis uses multiple datasets and different data partitioning strategies to construct diverse data heterogeneity conditions and evaluates model personalization performance, generalization performance, and algorithm transferability. The experimental results show that each module of the algorithm achieves the intended performance and can be readily transferred to other baseline algorithms, achieving a win-win between local model personalization performance and global model generalization ability.
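The server-side head aggregation mentioned above can be pictured as follows. This is a minimal illustrative sketch under my own assumptions, not the thesis's exact procedure: each client is assumed to upload only its classifier-head parameters (here, NumPy arrays keyed by name) together with its local sample count, and the server forms a data-size-weighted average in FedAvg style. The function name `aggregate_heads` and the parameter layout are hypothetical.

```python
import numpy as np

def aggregate_heads(client_heads, client_sizes):
    """Weighted-average the uploaded classifier heads (FedAvg-style).

    Only head parameters are aggregated; each client's feature-extractor
    body stays local, which is the source of personalization.
    """
    total = sum(client_sizes)
    global_head = {}
    for key in client_heads[0].keys():
        # Sum each parameter tensor, weighted by the client's data share.
        global_head[key] = sum(
            (n / total) * head[key]
            for head, n in zip(client_heads, client_sizes)
        )
    return global_head

# Toy usage: two clients with a linear head (weight W, bias b).
heads = [
    {"W": np.ones((4, 2)), "b": np.zeros(2)},
    {"W": 3 * np.ones((4, 2)), "b": np.ones(2)},
]
sizes = [100, 300]
print(aggregate_heads(heads, sizes)["W"][0])  # -> [2.5 2.5]
```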
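Likewise, the cosine classifier used on the client side can be sketched as below, again under my own assumptions: logits are taken as scaled cosine similarities between L2-normalized feature vectors and L2-normalized per-class weight vectors, which removes the magnitude component that intra-class variance inflates. The `scale` value and all variable names are illustrative, not taken from the thesis.

```python
import numpy as np

def cosine_logits(features, class_weights, scale=16.0):
    """Return scaled cosine-similarity logits.

    features:      (batch, dim) feature vectors from the local extractor
    class_weights: (num_classes, dim) learnable class prototypes
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    # Cosine similarity is the dot product of unit vectors; the scale
    # factor sharpens the softmax distribution during training.
    return scale * f @ w.T

# Toy usage: 8 samples, 32-dim features, 10 classes.
rng = np.random.default_rng(0)
logits = cosine_logits(rng.normal(size=(8, 32)), rng.normal(size=(10, 32)))
print(logits.shape)  # (8, 10)
```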