Generative models are widely used in data-level decentralized knowledge collaboration thanks to their powerful modeling capabilities, for example to address the heterogeneity and diversity issues in federated learning. However, learning generative models in a federated scenario is a fundamental yet challenging task, as the decentralized training of generative models is itself privacy-sensitive and communication-intensive, especially for high-dimensional multimedia data. Recent works have addressed this issue with privacy- and communication-friendly federated generative adversarial networks based on knowledge distillation. Starting from the knowledge distillation-based federated generative adversarial network approach, this thesis focuses on the common issues of privacy preservation, data heterogeneity, and communication overhead in federated scenarios. The main contributions are summarized as follows:

(1) This thesis proposes a federated privacy-preserving generative algorithm based on debiased discriminator distillation (D³-GAN). Theoretical analysis shows that traditional average-confidence distillation cannot reach the global minimum of the min-max game between the discriminator and the generator when data distributions are heterogeneous across federated clients. To address this, the thesis proposes β-percentile aggregation as a replacement for average aggregation. Since average distillation always underestimates the real-or-fake likelihood of samples, the thesis reduces the distillation bias caused by data heterogeneity and privacy randomness by calibrating the hyperparameter β to a value greater than 50 (e.g., β set to 60). Moreover, because the student discriminator typically requires a large number of training samples to converge, a method that selects only informative samples for distillation is proposed to save privacy budget. Experimental results in both non-private and differentially private settings demonstrate that the proposed method generates data of satisfactory quality even in the presence of data heterogeneity, achieving significant improvements over state-of-the-art methods.

(2) This thesis proposes a federated privacy-preserving generative algorithm based on conditional adversarial generator distillation (AG-CGAN). Since traditional methods apply only to discrete tabular data and simple image data, this thesis further explores conditional generative adversarial networks and adversarial knowledge distillation to extend the algorithm to complex image data. An adversarial training framework is constructed to improve distillation performance: it helps the student model cover and explore a larger data space, thereby improving the quality of the generator's output. In addition, because class information in the distillation stage is crucial for both data-free distillation and data generation, this thesis replaces the traditional generative adversarial network with a conditional generative adversarial network, which provides more accurate and effective guidance for the student model's learning process. Comprehensive experiments show that this method achieves better generation results than other related works on datasets such as MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100, demonstrating its good applicability to high-dimensional data.
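The β-percentile aggregation in contribution (1) can be illustrated with a minimal sketch. The function name, array shapes, and toy scores below are illustrative assumptions, not the thesis API; the sketch only shows why taking the β-th percentile (β > 50) of per-client discriminator confidences yields a higher, debiased target than averaging when clients are heterogeneous.

```python
import numpy as np

def aggregate_confidences(client_scores, beta=60):
    """Percentile-based aggregation of per-client discriminator confidences.

    client_scores: shape (n_clients, n_samples); each row holds one client
    discriminator's confidence for a shared batch of samples. (Hypothetical
    interface for illustration.) Average aggregation underestimates the
    real-or-fake likelihood under heterogeneity; a β-percentile with β > 50
    picks a higher-than-median score per sample to debias the target.
    """
    return np.percentile(client_scores, beta, axis=0)

# Toy comparison: three heterogeneous clients scoring four samples.
scores = np.array([
    [0.9, 0.2, 0.8, 0.6],  # client whose local data covers these samples
    [0.4, 0.1, 0.5, 0.3],  # client with disjoint data underrates them
    [0.7, 0.3, 0.6, 0.5],
])
avg = scores.mean(axis=0)                # average distillation target
p60 = aggregate_confidences(scores, 60)  # debiased target with β = 60
```

On this toy batch the β = 60 target is at least the per-sample median and exceeds the average target, matching the abstract's claim that averaging underestimates sample likelihoods.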
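The adversarial, class-conditional distillation in contribution (2) can be sketched in one common form of data-free adversarial distillation: a conditional generator is trained to maximize teacher-student disagreement, while the student is trained to match the teacher on the generated samples. All module sizes, names, and the linear stand-ins for the teacher, student, and generator below are assumptions for illustration; the thesis' exact networks and objectives may differ.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
NOISE_DIM, NUM_CLASSES, DATA_DIM = 8, 10, 32  # illustrative sizes

class CondGenerator(nn.Module):
    """Conditional generator: noise + one-hot class label -> synthetic sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, DATA_DIM))
    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))

teacher = nn.Linear(DATA_DIM, NUM_CLASSES)  # stand-in for the (fixed) teacher
student = nn.Linear(DATA_DIM, NUM_CLASSES)  # model being distilled
gen = CondGenerator()

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
kl = nn.KLDivLoss(reduction="batchmean")

for step in range(5):
    z = torch.randn(16, NOISE_DIM)
    y = torch.randint(0, NUM_CLASSES, (16,))
    y1h = nn.functional.one_hot(y, NUM_CLASSES).float()

    # Generator step: *maximize* teacher-student divergence so the student
    # is pushed to explore a larger region of the data space.
    x = gen(z, y1h)
    loss_g = -kl(student(x).log_softmax(1), teacher(x).softmax(1).detach())
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Student step: *minimize* the same divergence on fresh generated samples.
    x = gen(z, y1h).detach()
    loss_s = kl(student(x).log_softmax(1), teacher(x).softmax(1).detach())
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```

Conditioning the generator on the class label is what lets the distillation stage use class information as guidance, per the abstract's motivation for replacing the unconditional GAN with a conditional one.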