
Research on the Poisoning Attack Targeting the Polynomial Regression Model and Its Defense

Posted on: 2021-08-24
Degree: Master
Type: Thesis
Country: China
Candidate: X Liu
Full Text: PDF
GTID: 2518306548495654
Subject: Cyberspace security
Abstract/Summary:
With the extensive application of artificial intelligence (AI) technology in the economic, social, national security, and other fields, the security of AI has received increasing attention. Large-scale intelligent network attacks and spear-phishing attacks that use AI technology have become a phenomenon that cannot be ignored and have attracted the attention of mainstream network security companies. Recent research shows that machine learning algorithms, the core component of AI systems, carry potential security threats; current research on AI security therefore focuses mainly on machine learning security. Among the various threats, the poisoning attack can effectively damage a machine learning model by injecting poisoning samples into its training set, and it has been a main research object in adversarial machine learning in recent years. This paper proposes poisoning attack methods that target the model parameters and the model capacity of the polynomial regression model, and further proposes defense ideas. Specifically, the main contributions of this paper are the following three aspects; an illustrative code sketch for each follows the abstract.

(1) Two poisoning attack methods that target the parameters of the polynomial regression model are proposed. The prediction error of a machine learning model changes when its parameters change. In this type of poisoning attack, the attacker crafts poisoning samples to influence the training stage, so that abnormal parameters are output at the end of training and the prediction error on the test set (the test error) is high. The main factor determining the test error is the poisoning samples themselves. To improve their attack ability, this paper proposes two optimization strategies for poisoning samples: an approximate expansion strategy and a gradient ascent strategy. Both strategies improve the attack performance of the poisoning samples and further increase the test error while the number of poisoning samples remains unchanged.

(2) The security confrontation problem of model capacity is discussed for the first time in this paper. Model capacity expresses the fitting ability of a machine learning model: too large a capacity may cause the model to overfit, while too small a capacity may cause it to underfit. Model capacity is controlled by certain hyperparameters of the machine learning model, so if a malicious attacker can influence or control the hyperparameter selection process, changes to the model capacity will greatly degrade the model's predictive performance. This paper proposes a model-capacity poisoning attack strategy for the polynomial regression model, the degree confusion attack, which influences the hyperparameter selection process by injecting poisoning samples and ultimately increases the test error.

(3) Defensive countermeasures against the poisoning of model capacity are proposed. Much work has been devoted to reducing the impact of traditional poisoning attacks, but no defensive strategy for model capacity exists yet. This paper proposes an extended DBSCAN method based on the distribution characteristics of poisoning samples. Experiments show that this method can reduce the impact of poisoning samples to a certain extent and can provide guidance for future work on the security of model capacity.
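To make the gradient ascent strategy of contribution (1) concrete, the sketch below optimizes a single poisoning point for one-dimensional polynomial regression by ascending the gradient of the test error. The finite-difference gradient, the sign-step update, and all function names and parameter values are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

# Minimal sketch of gradient-ascent poisoning for 1-D polynomial regression.
# The finite-difference gradient and all parameter values are illustrative.

def fit_poly(X, y, degree):
    # Ordinary least-squares fit of a degree-`degree` polynomial.
    return np.polyfit(X, y, degree)

def test_error(coeffs, X_test, y_test):
    # Mean squared error of the (possibly poisoned) model on clean test data.
    return np.mean((np.polyval(coeffs, X_test) - y_test) ** 2)

def poison_step(x_p, y_p, X_tr, y_tr, X_te, y_te, degree=3,
                lr=0.05, steps=50, eps=1e-4):
    """Move one poisoning point x_p (label y_p fixed) to raise the test error."""
    def poisoned_loss(x):
        # Retrain on the training set plus the candidate poisoning point.
        coeffs = fit_poly(np.append(X_tr, x), np.append(y_tr, y_p), degree)
        return test_error(coeffs, X_te, y_te)

    for _ in range(steps):
        # Central finite-difference estimate of d(test error)/d(x_p).
        grad = (poisoned_loss(x_p + eps) - poisoned_loss(x_p - eps)) / (2 * eps)
        x_p += lr * np.sign(grad)  # ascend: move toward larger test error
    return x_p
```

Sign-of-gradient steps are used here only to keep the sketch numerically stable; in the thesis, the approximate expansion strategy plays a complementary role that this sketch does not cover.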
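For the degree confusion attack of contribution (2), the sketch below shows the attack surface: a standard cross-validation loop that selects the polynomial degree, run once on clean data and once with poisoning rows appended. The synthetic data and the crude high-variance poisoning labels are hypothetical placeholders; the thesis's actual crafting of poisoning samples is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def select_degree(X, y, candidates=range(1, 10)):
    # Pick the degree with the best 5-fold CV score (negated MSE).
    def cv_score(d):
        model = make_pipeline(PolynomialFeatures(d), LinearRegression())
        return cross_val_score(model, X, y,
                               scoring="neg_mean_squared_error", cv=5).mean()
    return max(candidates, key=cv_score)

rng = np.random.default_rng(0)
X_clean = rng.uniform(-1, 1, size=(100, 1))
y_clean = np.sin(3 * X_clean[:, 0]) + 0.1 * rng.standard_normal(100)

# Hypothetical poisoning rows: high-variance labels that distort the CV scores.
X_poison = rng.uniform(-1, 1, size=(10, 1))
y_poison = 5.0 * rng.standard_normal(10)

d_clean = select_degree(X_clean, y_clean)
d_poisoned = select_degree(np.vstack([X_clean, X_poison]),
                           np.concatenate([y_clean, y_poison]))
print(d_clean, d_poisoned)  # the attack succeeds if the chosen degrees differ
```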
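For the defense of contribution (3), the sketch below applies plain DBSCAN in the joint feature-label space and discards points flagged as noise, which is one plausible reading of a density-based sanitization step. The thesis's actual extension of DBSCAN, and the eps and min_samples values used here, are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Density-based training-set sanitization in the spirit of an extended-DBSCAN
# defense. Clustering over (feature, label) pairs isolates points whose label
# deviates from their neighbourhood; DBSCAN marks such points as noise (-1).

def filter_suspected_poison(X, y, eps=0.3, min_samples=5):
    Z = np.column_stack([X, y])                       # joint feature-label space
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(Z)
    keep = labels != -1                               # keep only clustered points
    return X[keep], y[keep]
```

Retraining the model on the filtered set, including the degree-selection step, would then be expected to recover a more reasonable capacity.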
Keywords/Search Tags: Polynomial regression model, poisoning attack, model parameters, model capacity, adversarial machine learning