
Robustness Of Intrusion Detection Methods In Adversarial Environment

Posted on: 2019-11-16    Degree: Master    Type: Thesis
Country: China    Candidate: Y F Dong    Full Text: PDF
GTID: 2428330611493438    Subject: Computer Science and Technology
Abstract/Summary:
At present, new technologies and applications are constantly being updated and iterated, entering everyday life through applications on smartphones. These innovative applications depend on network interconnection, and the rapid development of network technology has made network security problems especially prominent: attack methods are increasingly diversified, and a secure network environment is the cornerstone of sustainable, healthy development in this period. The rapid growth of the network has also led to explosive growth of network data. Traditional network-security defenses such as firewalls and conventional intrusion detection systems (IDS) struggle to play an effective role at this scale, so most new IDS are combined with machine learning algorithms. Applying machine learning to the data analysis and processing of intrusion detection has proved effective in practice: it improves detection efficiency, makes the IDS more intelligent, and reduces both the false negative rate and the false positive rate. However, as machine learning is applied ever more deeply across fields, the security weaknesses of machine learning itself have also been exposed, posing potential security risks for IDS that rely on it.

(1) Attacks in the adversarial environment. In view of the security threats faced by machine learning algorithms in the adversarial environment, this thesis first describes how the vulnerability of those algorithms can be exploited to attack an IDS. The hypotheses of the adversarial model are studied, together with the relevant classes of machine learning algorithms and adversarial example generation against feature selection algorithms. Analysing how adversarial example poisoning influences the feature selection model, we propose a new attack model that uses gradient-based poisoning: examples produced by the adversarial generation algorithm are used to mislead the learning algorithm. Feature selection algorithms in the adversarial environment are studied in depth in terms of classification error rate and feature-subset consistency. Three classic Embedded and one Filter feature selection algorithm serve as target algorithms, and experiments are conducted on four data sets: KDD99, NSL-KDD, Kyoto 2006+ and WSN-DS. The results show that our adversarial examples effectively degrade the accuracy of feature selection and significantly reduce the accuracy of sample classification.

(2) Defense methods in the adversarial environment. Two approaches are studied. The first is the proactive defense mechanism commonly used against adversarial examples, comprising adversarial example learning and data cleaning. The second is based on the Moving Target Defense (MTD) approach. Combining the proactive defense methods with the idea of MTD, we propose Moving Target Defense for Feature Selection (MTDF): by configuring multiple MTDF defense strategies, the system can switch between different models, combining proactive defenses with feature selection algorithms to achieve higher robustness. Experiments on the NSL-KDD and Kyoto 2006+ data sets verify that MTDF significantly reduces the impact of adversarial examples on the classification accuracy of the system; that is, classification accuracy remains relatively stable under the influence of adversarial examples. This defense model therefore effectively improves the robustness of the IDS.

In summary, this thesis conducts an in-depth study of attack and defense methods for feature selection algorithms in the adversarial environment. Targeting the vulnerability of feature selection, a feature selection poisoning attack based on gradient optimization is proposed, and the effectiveness of the attack and defense methods is verified by experiments. On this basis, defense methods for feature selection in the adversarial environment are studied: an MTD-based defense model for feature selection algorithms is proposed and verified experimentally. Together these constitute a security analysis of machine learning algorithms in the adversarial environment.
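The gradient-poisoning idea summarized above can be illustrated with a minimal sketch. This is not the thesis's actual algorithm or data: it uses synthetic data, plain logistic regression as a stand-in for an embedded feature selector (ranking features by |w|), an FGSM-style perturbation of the training inputs along the loss gradient as the poisoning step, and Jaccard overlap as the feature-subset consistency measure. All names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "traffic" data: only the first 3 of 10 features are informative.
n, d, k = 400, 10, 3
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:k] = 2.0
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def train_logreg(X, y, lr=0.1, steps=300):
    """Gradient-descent logistic regression; |w| ranks features,
    standing in for an embedded feature-selection algorithm."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w = train_logreg(X, y)
clean_subset = set(np.argsort(-np.abs(w))[:k])

# FGSM-style poisoning: shift each training point along the sign of the
# per-sample loss gradient w.r.t. the input, computed under the clean model.
p = 1.0 / (1.0 + np.exp(-(X @ w)))
grad_x = np.outer(p - y, w)          # d(loss)/dx for each sample
X_poison = X + 0.8 * np.sign(grad_x)

w_p = train_logreg(X_poison, y)
poisoned_subset = set(np.argsort(-np.abs(w_p))[:k])

# Feature-subset consistency (Jaccard) and clean-data accuracy of both models.
jaccard = len(clean_subset & poisoned_subset) / len(clean_subset | poisoned_subset)
acc = lambda w: np.mean(((X @ w) > 0) == y)
print(f"subset consistency: {jaccard:.2f}")
print(f"clean acc: {acc(w):.2f}  poisoned acc: {acc(w_p):.2f}")
```

The sketch mirrors the abstract's two evaluation axes: the Jaccard score tracks how much the selected feature subset drifts under poisoning, and the accuracy gap shows the degradation of a model retrained on poisoned data.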
Keywords/Search Tags:Intrusion Detection, Adversarial Sample, Feature Selection, Poisoning Attack, Adversarial Learning