
Research On Federated Learning Algorithm For Multi-source Heterogeneous Data

Posted on: 2023-02-06
Degree: Master
Type: Thesis
Country: China
Candidate: B C Xiong
Full Text: PDF
GTID: 2558306620985669
Subject: Engineering
Abstract/Summary:
With the development of artificial intelligence, people are concerned not only with the amount of data but also with its privacy and security. More and more countries and regions have enacted laws to protect the privacy and security of user data, which poses unprecedented challenges for the field of artificial intelligence. How to design a machine learning framework that uses data from different sources efficiently and accurately has therefore become an important topic. Federated learning is a machine learning paradigm in which several clients that store data train a model collaboratively while preserving data security and privacy; it guarantees that different clients can train together without sharing their data. One of the most important challenges in federated learning is data heterogeneity: the data distributions of different clients can differ significantly from one another.

Many methods have been proposed to handle heterogeneous multi-source data, but the following problems remain. (1) When a client holds multimodal data, the differences in data distribution become larger. Most existing federated learning methods rely on single-modal data; although multimodal data benefits from the complementarity of different modalities, traditional federated learning methods have difficulty with multimodal federated learning because of modality gaps. (2) Different clients may hold data of different modalities; for example, some clients may provide sensor signals while others can only provide visual data. Because the data modalities differ across clients, the differences in data distribution are even greater.

To overcome these problems, the main work of this thesis is as follows.

(1) To address multimodal data on the clients, this thesis studies multimodal federated learning tasks. It defines and studies multimodal federated learning for the first time and proposes a unified framework that uses a co-attention mechanism to fuse complementary information from different modalities. The proposed federated learning algorithm can learn useful global features across modalities and train a common model for all clients. In addition, a personalization approach based on model-agnostic meta-learning is used to adapt the final model to each client. Extensive experimental results on multimodal activity recognition tasks demonstrate the effectiveness of the method.

(2) To address the setting in which different clients hold data of different modalities, this thesis studies the cross-modal federated learning task. It defines and studies cross-modal federated learning for the first time and proposes a feature-disentangled network with five modules: an altruistic encoder, an egocentric encoder, a shared classifier, a private classifier, and a modality discriminator. The altruistic encoder collaboratively embeds local instances on different clients into a modality-agnostic feature subspace. The egocentric encoder produces modality-specific features that cannot be shared across clients with different modalities. The modality discriminator adversarially guides the parameter learning of the altruistic and egocentric encoders. Through decentralized optimization with a spherical modality-discriminative loss, the model not only generalizes well across different clients by leveraging the modality-agnostic features but also captures the modality-specific discriminative features of each client.
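The thesis does not include code; as a rough illustration of the co-attention fusion idea in the multimodal framework above, the following is a minimal sketch, assuming PyTorch. The class name CoAttentionFusion, the feature dimensions, and the pooling choice are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch (assumption: PyTorch) of a co-attention fusion module for two
# modality feature streams, in the spirit of the multimodal federated learning
# framework described above. Names and dimensions are illustrative.
import torch
import torch.nn as nn


class CoAttentionFusion(nn.Module):
    """Fuses two modality feature sequences via cross-attention in both directions."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Each modality attends to the other ("co-attention").
        self.attn_a_to_b = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_b_to_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (batch, seq_len, dim) features from two modalities,
        # e.g. sensor-signal embeddings and visual-frame embeddings.
        a_attends_b, _ = self.attn_a_to_b(feat_a, feat_b, feat_b)  # queries from A
        b_attends_a, _ = self.attn_b_to_a(feat_b, feat_a, feat_a)  # queries from B
        # Pool over the sequence dimension and concatenate the two views.
        fused = torch.cat([a_attends_b.mean(dim=1), b_attends_a.mean(dim=1)], dim=-1)
        return self.proj(fused)  # (batch, dim) fused representation


if __name__ == "__main__":
    fusion = CoAttentionFusion(dim=64)
    sensor_feat = torch.randn(8, 20, 64)   # e.g. accelerometer window embeddings
    visual_feat = torch.randn(8, 16, 64)   # e.g. video frame embeddings
    print(fusion(sensor_feat, visual_feat).shape)  # torch.Size([8, 64])
```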
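The abstract also mentions personalization based on model-agnostic meta-learning. A common way to realize this is to let each client adapt the aggregated global model with a few local gradient steps; the sketch below shows that adaptation step only. The function name personalize, the hyperparameters, and the use of plain fine-tuning as the inner loop are assumptions for illustration, not the thesis algorithm.

```python
# Minimal sketch (assumption: PyTorch) of MAML-style per-client personalization:
# each client copies the global model and adapts it with a few local steps on
# its private data. `global_model` and `client_loader` are placeholders.
import copy
import torch
import torch.nn as nn


def personalize(global_model: nn.Module, client_loader, inner_lr: float = 0.01,
                inner_steps: int = 5) -> nn.Module:
    """Return a client-specific copy of the global model after a few local steps."""
    local_model = copy.deepcopy(global_model)   # keep the global weights intact
    optimizer = torch.optim.SGD(local_model.parameters(), lr=inner_lr)
    criterion = nn.CrossEntropyLoss()
    local_model.train()
    step = 0
    for x, y in client_loader:                  # the client's private data
        optimizer.zero_grad()
        loss = criterion(local_model(x), y)
        loss.backward()
        optimizer.step()
        step += 1
        if step >= inner_steps:
            break
    return local_model
```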
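For the cross-modal part, the following is a minimal sketch, again assuming PyTorch, of how the five modules (altruistic encoder, egocentric encoder, shared classifier, private classifier, modality discriminator) might be wired together. The gradient-reversal trick, the linear layers, and all names are illustrative assumptions; the spherical modality-discriminative loss and the decentralized optimization are not shown.

```python
# Minimal sketch (assumption: PyTorch) of the feature-disentanglement idea for
# cross-modal federated learning: a modality-agnostic (altruistic) encoder, a
# modality-specific (egocentric) encoder, shared/private classifiers, and a
# modality discriminator trained adversarially via gradient reversal.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, flips the gradient sign on the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class DisentangledClientModel(nn.Module):
    def __init__(self, in_dim: int, feat_dim: int, num_classes: int, num_modalities: int):
        super().__init__()
        self.altruistic = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())   # shared across clients
        self.egocentric = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())   # kept local
        self.shared_clf = nn.Linear(feat_dim, num_classes)        # on modality-agnostic features
        self.private_clf = nn.Linear(2 * feat_dim, num_classes)   # on concatenated features
        self.modality_disc = nn.Linear(feat_dim, num_modalities)  # adversary on altruistic features

    def forward(self, x):
        z_shared = self.altruistic(x)
        z_private = self.egocentric(x)
        logits_shared = self.shared_clf(z_shared)
        logits_private = self.private_clf(torch.cat([z_shared, z_private], dim=-1))
        # Gradient reversal: the discriminator learns to predict the modality,
        # while the altruistic encoder is pushed to make that prediction impossible.
        modality_logits = self.modality_disc(GradReverse.apply(z_shared))
        return logits_shared, logits_private, modality_logits
```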
Keywords/Search Tags: Federated Learning, Multimodal Representation Learning, Cross-Modal