
A Research On Deep Neural Networks Under Imbalanced Learning Settings

Posted on: 2020-03-24
Degree: Master
Type: Thesis
Country: China
Candidate: Z Y Guo
GTID: 2428330596975100
Subject: Computer Science and Technology

Abstract:
Deep Neural Networks (DNNs) have achieved state-of-the-art performance on numerous problems in recent years. For example, Convolutional Neural Networks (CNNs) have reached human-level accuracy on image classification, while Recurrent Neural Networks perform well on natural language processing tasks. Thanks to their ability to extract high-level features from raw data, DNNs can surpass traditional machine learning algorithms by a large margin. Yet recent research shows that DNNs are vulnerable to adversarial attacks mounted with so-called adversarial examples. Imbalanced learning, where some classes are underrepresented in the dataset, can also harm model performance; classical remedies include up/down-sampling and cost-sensitive loss functions.

Focusing on DNNs under imbalanced settings, we combine CNNs with the Minimax Probability Machine (MPM) and its variant, the Biased Minimax Probability Machine (BMPM): the CNN extracts classifier-friendly features, while the MPM minimizes an upper bound on the model's misclassification rate. The contributions are divided into two parts.

First, we combine MPM and CNN into DeepMPM. Since MPM was designed for linear binary classification, we use a one-vs-all scheme to decompose the dataset into binary tasks. Classification experiments on N binary tasks show that DeepMPM reaches performance comparable to a plain CNN, and its accuracy under adversarial attacks further demonstrates the model's robustness. To overcome the limitation of the binary setting, we then attach multiple MPM classifiers in parallel to the end of the CNN and sum their losses; with this ensemble over the shared high-level features, the model also performs well on multi-class tasks.

Second, to improve accuracy on the minority class in binary imbalanced settings, we combine a variant of MPM, the Biased MPM (BMPM), with a CNN, yielding DeepBMPM. By treating the two classes differently through a pre-determined lower bound on one class's misclassification rate, we optimize the loss with special care for the minority class. Experimental results show improved accuracy on the minority class while maintaining performance on the whole dataset. To handle multi-class problems, we modify the loss function of DeepBMPM via fractional programming and Lagrange multipliers, then sum the losses of multiple BMPMs; the resulting multi-class DeepBMPM improves accuracy over multi-class DeepMPM. Finally, applying adversarial attacks to DeepBMPM and a CNN, the better predictions of DeepBMPM demonstrate its robustness.
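To make the MPM objective behind DeepMPM concrete, the sketch below (an illustration under the standard MPM formulation, not the thesis code; all function names are hypothetical) computes the worst-case margin kappa(w) = wᵀ(μ₊ − μ₋) / (√(wᵀΣ₊w) + √(wᵀΣ₋w)) from sample feature statistics, the distribution-free misclassification bound 1/(1 + κ²) that MPM minimizes, and a summed one-vs-all loss of the kind used when parallel MPM heads share CNN features:

```python
import numpy as np

def mpm_kappa(w, X_pos, X_neg):
    """Worst-case margin kappa for direction w, from sample statistics.

    MPM maximizes kappa(w) = w^T(mu+ - mu-) /
        (sqrt(w^T S+ w) + sqrt(w^T S- w));
    over all distributions with these means/covariances, the
    misclassification probability is bounded above by 1/(1 + kappa^2).
    """
    mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
    S_p = np.cov(X_pos, rowvar=False)  # per-class feature covariances
    S_n = np.cov(X_neg, rowvar=False)
    num = w @ (mu_p - mu_n)
    den = np.sqrt(w @ S_p @ w) + np.sqrt(w @ S_n @ w)
    return num / den

def mpm_error_bound(kappa):
    # Distribution-free upper bound on the misclassification probability.
    return 1.0 / (1.0 + kappa ** 2)

def one_vs_all_mpm_loss(W, feats_by_class):
    """Sum of negative margins over parallel one-vs-all MPM heads.

    feats_by_class[c] holds the (CNN-extracted) features of class c;
    head c separates class c from the union of all other classes.
    Minimizing this sum maximizes every head's worst-case margin.
    """
    loss = 0.0
    for c, w in enumerate(W):
        X_pos = feats_by_class[c]
        X_neg = np.vstack([f for i, f in enumerate(feats_by_class) if i != c])
        loss += -mpm_kappa(w, X_pos, X_neg)
    return loss

# Toy example: two well-separated Gaussian "feature" clouds.
rng = np.random.default_rng(0)
X_a = rng.normal(loc=+2.0, scale=1.0, size=(500, 2))
X_b = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))
w = np.array([1.0, 1.0])                  # a fixed (untrained) direction
k = mpm_kappa(w, X_a, X_b)
bound = mpm_error_bound(k)                # small: classes are easy to separate
loss = one_vs_all_mpm_loss([w, -w], [X_a, X_b])
```

In DeepMPM these statistics would be computed on mini-batches of CNN features and the negative margin back-propagated end to end; the biased variant instead fixes one class's bound and optimizes the other's, which is what the fractional-programming reformulation handles.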
Keywords/Search Tags:Imbalanced Learning, Deep Neural Networks, Minimax Probability Machine, Adversarial Attacks