
Research On Attack And Defense Method Of Malware Classifier Based On Adversarial Principle

Posted on: 2021-01-30
Degree: Master
Type: Thesis
Country: China
Candidate: R Wu
Full Text: PDF
GTID: 2428330611999760
Subject: Computer technology
Abstract/Summary:
With the success of artificial intelligence in image and natural language processing, researchers have gradually applied machine learning to malware detection and achieved good results. However, from the "poisoning" attacks on machine learning models studied since 2006 to the adversarial examples discovered in 2014, more and more researchers have grown concerned about the security of machine learning models. The most effective defense against adversarial examples is to train the model on a combination of multiple adversarial examples and the original samples; this kind of defense is called adversarial training. Adversarial examples can therefore serve both as a means of attacking a model and as a channel for improving its robustness against attack.

When detecting malware with deep learning, prior researchers exploited the gaps at the tail of malicious code and perturbed the original examples by maximizing the loss to generate adversarial examples, but this method is limited by the sample size and takes a long time to generate. Reducing the cost of generating adversarial examples for malware detection, and generating more diverse kinds of adversarial examples, is of great significance for improving a model's robustness against attack.

Considering the characteristics of the PE file format, the interpretability of the model, and the shortcomings of existing adversarial example generation algorithms, this paper proposes adversarial example generation algorithms for both the black-box and the white-box scenario. This paper first locates new positions for inserting perturbation values by finding the redundant space of the executable file, and adds first-order and second-order moment estimation to the fast gradient sign method to resolve the coupling problem between single-step and iterative attacks. In the white-box scenario, the discriminative features of the
benign examples are calculated by the model and used as the perturbation values, and a new block is appended to the executable file as the location where the perturbation values are inserted. This paper also analyzes the influence of packed examples on the model, packing the examples with two different methods, compression and encryption; both kinds of packed examples are found to reduce the model's accuracy to different degrees. Finally, this paper selects appropriate adversarial examples from two aspects, the L2 norm of the perturbation and the discriminative features, and combines the packed examples with robust-feature examples to improve the model's robustness against attack.

This paper validates the proposed adversarial example generation algorithms on two unpacked datasets, mainly in terms of the attack success rate, the generation time, and the L2 norm of the perturbation values. These algorithms are no longer limited by the example size, and in both the black-box and white-box scenarios they effectively improve the success rate of adversarial example attacks. This paper also validates the model's accuracy on the packed dataset. Finally, a variety of adversarial and packed samples are compared against the target model, and the resulting changes in the target model's accuracy and robustness are observed experimentally, demonstrating that the proposed defense method effectively improves the robustness of the target model.
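The idea of adding first-order and second-order moment estimation to the fast gradient sign method can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual code: `grad_fn`, the step size, and the dense feature vector are hypothetical stand-ins, and a real PE-file attack would write the perturbation only into the located redundant space rather than into every feature.

```python
import numpy as np

def moment_fgsm(x, grad_fn, steps=10, lr=0.1,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """Iterative FGSM smoothed by Adam-style first/second moment
    estimates. grad_fn(x) is assumed to return d(loss)/dx for the
    target model; the loss is ascended to produce an adversarial x."""
    x_adv = np.asarray(x, dtype=np.float64).copy()
    m = np.zeros_like(x_adv)   # first-order moment (momentum)
    v = np.zeros_like(x_adv)   # second-order moment (adaptive scale)
    for t in range(1, steps + 1):
        g = grad_fn(x_adv)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        # sign step stabilized by the moment estimates, which couples
        # the single-step update rule with the iterative attack
        x_adv += lr * np.sign(m_hat / (np.sqrt(v_hat) + eps))
    return x_adv
```

The moment accumulators play the same role here as in the Adam optimizer: they damp oscillating gradient directions across iterations, so each signed step keeps pushing the loss upward instead of overshooting.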
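The defense step, selecting adversarial examples by the L2 norm of their perturbations and mixing them with packed and original samples for adversarial retraining, follows the standard pattern sketched below. This is a minimal sketch under assumptions: the function names and the array-based data layout are illustrative, not the thesis's implementation.

```python
import numpy as np

def select_by_l2(originals, adversarials, max_l2):
    """Keep only adversarial examples whose perturbation has an L2
    norm of at most max_l2 (the norm-based selection criterion)."""
    return [x_adv for x, x_adv in zip(originals, adversarials)
            if np.linalg.norm(x_adv - x) <= max_l2]

def augment_training_set(X, y, X_adv, y_adv, X_packed, y_packed, seed=0):
    """Mix original, adversarial, and packed examples into one
    shuffled training set for adversarial (re)training."""
    X_all = np.concatenate([X, X_adv, X_packed])
    y_all = np.concatenate([y, y_adv, y_packed])
    idx = np.random.default_rng(seed).permutation(len(X_all))
    return X_all[idx], y_all[idx]
```

Bounding the perturbation norm keeps the retraining set close to the data distribution the model will actually see, while the packed samples cover the accuracy loss that compression and encryption packing were observed to cause.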
Keywords/Search Tags: malware detection, adversarial example, robustness