
Federated Generative Adversarial Optimization Algorithm For Non-IID Data

Posted on: 2022-02-19
Degree: Master
Type: Thesis
Country: China
Candidate: Y Huang
Full Text: PDF
GTID: 2518306569975849
Subject: Software engineering
Abstract/Summary:
In recent years, data has become more transparent and open, and privacy and security have received widespread attention in the big data era. Data privacy has become a barrier to machine learning that urgently needs to be resolved. Federated learning is a distributed machine learning framework in which each client trains independently. It has a wide range of application scenarios: participating clients can train simultaneously without transmitting or exchanging raw data, which provides good protection of data privacy.

The federated optimization algorithm is the core of federated learning. FedAvg, proposed by Google, is a typical federated learning algorithm: in each round, every client trains locally and transmits its model parameters to the server for aggregation. By using parameter aggregation instead of gradient aggregation to complete federated modeling, FedAvg trades off between accuracy and training speed, and almost all federated optimization algorithms are based on this idea. However, in practical federated learning scenarios the data distribution across clients is extremely unbalanced (Non-IID), so the aggregated gradients become inaccurate and model performance degrades.

To tackle the gradient deviation caused by Non-IID data, this paper proposes a novel federated optimization algorithm, FedGenCAM, which introduces the generative adversarial network AC-GAN into the federated learning framework. The proposed model has generation capabilities and a category aggregation mechanism that allows the generator to perceive the available categories. Each client performs natural category expansion based on the generator's capabilities and can then generate samples of all categories, alleviating the problems caused by the uneven distribution of data categories. In addition, two techniques are proposed to reduce the gradient deviation: a new category loss function that yields a more balanced category gradient, and category integration, which reduces the gradient deviation caused by averaged parameter aggregation.

Experiments conducted on MNIST, FEMNIST, Shakespeare, and Sentiment140 show that the convergence speed is 1.2 times that of FedAvg. When achieving the same classification accuracy, the proposed model saves nearly 20% of the training time; its classification accuracy is 3-5% higher than FedAvg on average; and it successfully reduces the gradient deviation during training, thereby addressing the problem of Non-IID data.
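The server-side parameter aggregation that FedAvg performs each round can be sketched as a sample-size-weighted average of client parameters. The following is a minimal illustrative sketch (not the thesis's FedGenCAM implementation); the function name, the toy single-layer "models", and the client sample sizes are all hypothetical:

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """FedAvg-style aggregation: weight each client's parameters
    by its share of the total training samples, then sum.

    client_params: one list of np.ndarray layers per client
    client_sizes:  number of local training samples per client
    """
    total = sum(client_sizes)
    aggregated = []
    # zip(*...) groups the same layer across all clients.
    for layer_versions in zip(*client_params):
        aggregated.append(
            sum(w * (n / total) for w, n in zip(layer_versions, client_sizes))
        )
    return aggregated

# Two toy clients with a single-layer model; client 2 holds 3x the data,
# so its parameters dominate the weighted average.
c1 = [np.array([1.0, 1.0])]
c2 = [np.array([5.0, 5.0])]
result = fedavg_aggregate([c1, c2], client_sizes=[1, 3])
print(result[0])  # [4. 4.]
```

When client data is Non-IID, these locally trained parameters drift in different directions, which is exactly the gradient deviation the abstract describes.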
Keywords/Search Tags:Federated Learning, Generative Adversarial Network, Gradient Deviation, Class Aggregation