
Research On Generalization Of Adversarial Models For Imbalanced Data

Posted on: 2024-05-18    Degree: Master    Type: Thesis
Country: China    Candidate: Y F Wang    Full Text: PDF
GTID: 2568307067973079    Subject: Computer technology
Abstract/Summary:
In recent years, adversarial training has become a widely used defense against adversarial attacks on deep learning models. However, most existing research on adversarial training targets balanced datasets, in which every class has the same number of training examples; adversarial training on imbalanced datasets has received comparatively little attention. Data imbalance leaves minority classes with few examples, and attackers often single out those classes, which affects how adversarial examples are generated and selected. When adversarial training is performed on an imbalanced dataset, the model tends to concentrate on the majority classes, weakening its ability to recognize the minority classes. This thesis therefore investigates how to train adversarial models effectively on imbalanced datasets, so as to improve both classification performance on minority-class examples and generalization under adversarial attack. The research content and contributions are as follows:

(1) A balanced Softmax cross-entropy loss-based adversarial training scheme (Balanced-SCEL) is proposed. On imbalanced datasets, the sample-balance assumption underlying traditional adversarial training no longer holds: minority classes contribute too few samples when adversarial examples are generated, so the generated examples fail to reflect the true data distribution. The resulting gap between the training and test distributions makes adversarial training itself imbalanced and leaves deep learning models performing poorly on minority classes. Balanced-SCEL therefore dynamically adjusts class weights to adapt to different data distributions: sample weights are set according to each class's frequency (or sample importance) in the training set, balancing the weights across classes so that minority-class samples receive more attention during training. This counteracts the distribution shift induced by adversarial-example attacks and improves the model's generalization on minority classes (a sketch of the idea follows this abstract).

(2) A Mixup and loss-reweighting-based adversarial training scheme (Mix-RCEL) is proposed. To address the overfitting of adversarially trained models in imbalanced-data scenarios, the problem is tackled from both the data and the model perspective. On the data side, Mixup is used to generate a batch of virtual samples, making the classes more balanced. On the model side, a loss-reweighting strategy assigns different weights to samples of different classes, so that the model pays more attention to minority-class samples and performs better on imbalanced datasets. In addition, to exploit loss reweighting more effectively, Mix-RCEL adopts a delayed-reweighting training schedule, which improves the model's robustness to adversarial examples, reduces its sensitivity to adversarial attacks, and improves its generalization on the test set (see the second sketch below).
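The abstract does not give the exact formulation of Balanced-SCEL, but the standard Balanced Softmax loss (logits shifted by per-class log-priors) combined with PGD-based adversarial training captures the stated idea. The following PyTorch sketch is illustrative only: `class_counts`, the PGD budget `eps`, and the step names are assumptions, not the thesis's actual settings.

```python
import torch
import torch.nn.functional as F

class BalancedSoftmaxLoss(torch.nn.Module):
    """Balanced Softmax cross-entropy: each logit is shifted by the log of
    its class prior, so head classes no longer dominate the softmax
    normaliser and minority classes receive proportionally more gradient."""

    def __init__(self, class_counts):
        super().__init__()
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        # Log-prior per class; a buffer so it follows the module's device.
        self.register_buffer("log_prior", torch.log(counts / counts.sum()))

    def forward(self, logits, targets):
        return F.cross_entropy(logits + self.log_prior, targets)

def pgd_attack(model, x, y, loss_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD; crafting the perturbation with the balanced loss means
    minority-class inputs are attacked under the same rebalanced objective."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

# One adversarial training step (model, optimizer, x, y are hypothetical):
# loss_fn = BalancedSoftmaxLoss(class_counts).to(device)
# x_adv = pgd_attack(model, x, y, loss_fn)
# loss = loss_fn(model(x_adv), y); loss.backward(); optimizer.step()
```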
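For Mix-RCEL, the abstract specifies Mixup plus class-dependent loss reweighting with a delayed-reweighting schedule, but not the weighting formula. The sketch below assumes one common choice, the "effective number" weights of Cui et al. (2019), and a delayed-reweighting switch in the style of Cao et al.'s DRW; `warmup_epochs=60` and all names are illustrative, and the adversarial-perturbation step is noted but omitted.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=1.0):
    """Mixup: train on convex combinations of example pairs and their labels."""
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

def class_balanced_weights(class_counts, beta=0.9999):
    """One common reweighting choice: 'effective number' weights
    (1 - beta) / (1 - beta^n_c), normalised to average 1 over classes."""
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    w = (1.0 - beta) / (1.0 - torch.pow(beta, counts))
    return w * counts.numel() / w.sum()

def train_epoch(model, loader, optimizer, epoch, class_counts,
                device, warmup_epochs=60, alpha=1.0):
    # Delayed reweighting: uniform weights during warm-up, class-balanced
    # weights afterwards, so reweighting acts on an already-fitted model.
    weights = (class_balanced_weights(class_counts).to(device)
               if epoch >= warmup_epochs else None)
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # In the full Mix-RCEL scheme the batch would also be adversarially
        # perturbed (e.g. with PGD as sketched earlier); omitted here.
        x_mix, y_a, y_b, lam = mixup_batch(x, y, alpha)
        logits = model(x_mix)
        loss = (lam * F.cross_entropy(logits, y_a, weight=weights)
                + (1 - lam) * F.cross_entropy(logits, y_b, weight=weights))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```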
Keywords/Search Tags:Adversarial Examples, Adversarial Training, Data Imbalance, Mixup Technique