
Robust Learning Against Poisoning Attack In Adversarial Environment

Posted on: 2018-07-30
Degree: Master
Type: Thesis
Country: China
Candidate: Y Shu
Full Text: PDF
GTID: 2348330536478579
Subject: Computer Science and Technology
Abstract/Summary:
Machine learning techniques have been applied to many real-world applications because of their satisfactory performance. However, an attacker may attempt to mislead a machine learning system, especially in security-related applications; for example, a fake fingerprint can be used to cheat a fingerprint recognition system. A robust learning algorithm is therefore important in such an adversarial environment to reduce the influence of attacks. One common attack, the poisoning attack, contaminates the training set of a classifier. Since most of a classifier's knowledge is learnt from its training set, a poisoning attack significantly influences the learning process. Consequently, how to learn from a dataset that may have been attacked is an important research problem.

Existing defences against poisoning attacks have two problems: samples lying between classes, which usually carry useful information for classification, are easily removed by data sanitization, while a robust learning method may not be applicable to some applications. This study therefore proposes a transfer learning based robust learning model to defend against poisoning attacks. In our model, a classifier learns each sample differently according to its reliability, most kinds of learning algorithms can be applied, and, unlike data sanitization, no sample is filtered out completely.

In this study, the well-known transfer learning method TrAdaBoost is first investigated experimentally in an adversarial environment. Based on the identified problems of TrAdaBoost, four different models are proposed: 1) the weight-updating method for the target dataset is revised, so that the proposed model learns the target data from different aspects; 2) instead of the ensemble classifier used in TrAdaBoost, a single classifier is trained on the target and source samples with the normalized weights of the last iteration; 3) the classifier generated in the final iteration is used as the final classifier; and 4) the combination of changes (1) and (2). All the methods are evaluated and compared with existing methods experimentally. The results show that the proposed methods achieve better performance in most cases and confirm that the proposed model is able to enhance the security of a system against poisoning attacks.
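Since the full text is not available here, the following minimal sketch illustrates only the standard TrAdaBoost weight update (Dai et al., 2007) that the thesis builds on, recast for the poisoning setting: the possibly contaminated data play the role of the source set and a small trusted set the role of the target set. The function name, the decision-stump base learner, and the 0/1 label encoding are illustrative assumptions, not the thesis's implementation or its proposed modifications.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(X_src, y_src, X_tgt, y_tgt, n_iters=20):
    # X_src/y_src: possibly poisoned samples; X_tgt/y_tgt: trusted samples.
    # Labels are assumed to be encoded as 0/1.
    n_src = len(X_src)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.ones(len(X)) / len(X)                       # uniform initial weights
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_iters))
    clfs, betas = [], []
    for _ in range(n_iters):
        p = w / w.sum()
        clf = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
        err = np.abs(clf.predict(X) - y)               # 0/1 loss per sample
        # Weighted error measured on the trusted (target) part only.
        eps = np.sum(p[n_src:] * err[n_src:]) / p[n_src:].sum()
        eps = min(max(eps, 1e-10), 0.499)              # keep beta_t well defined
        beta_t = eps / (1.0 - eps)
        # Source samples: shrink weight when misclassified, so suspicious
        # (possibly poisoned) points are gradually down-weighted but never
        # removed outright. Target samples: grow weight when misclassified,
        # as in AdaBoost.
        w[:n_src] *= beta_src ** err[:n_src]
        w[n_src:] *= beta_t ** (-err[n_src:])
        clfs.append(clf)
        betas.append(beta_t)
    # Standard TrAdaBoost ensembles the classifiers of the later iterations;
    # the thesis's variants (2)-(4) alter this final-classifier choice.
    return clfs, betas

The sketch makes the contrast drawn in the abstract concrete: weights of unreliable samples decay multiplicatively rather than being cut off, so, unlike data sanitization, no sample is ever filtered out completely.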
Keywords/Search Tags: Poisoning attack, Transfer learning, Robust learning