
Research On Adversarial Attack And Defense For Malware Detection Model

Posted on: 2024-09-19    Degree: Master    Type: Thesis
Country: China    Candidate: K Li    Full Text: PDF
GTID: 2568307100473474    Subject: Cyberspace security
Abstract/Summary:
With the rapid development of Internet technology, the cost of spreading malware through cyberspace has decreased, and the large amount of malware in circulation poses a serious challenge to social security. Deep learning has made significant progress in image classification, object detection, speech recognition, and recommendation systems, and applying it to malware detection has become an important research direction. However, recent studies have shown that deep learning models lack robustness and are vulnerable to attacks based on adversarial examples: an attacker can deceive a model by crafting such examples. For a deep learning malware detection model, an attacker can generate adversarial malware that preserves the original malicious functionality yet is classified as benign by the detector, that is, malware that evades detection.

Current research on adversarial malware is conducted primarily from the perspectives of attackers and defenders. From the attacker's perspective, several problems remain in adversarial attacks against malware detection models. First, in the black-box scenario, the adversarial malware generated by existing methods loses its functionality, injects large perturbations, and achieves a low attack success rate. Second, in the white-box scenario, because malware is discrete, the adversarial malware generated by existing methods has a low attack success rate and takes too long to generate. From the defender's perspective, existing adversarial defense research is based mainly on the image and text domains, so existing defense methods cannot be applied directly to adversarial malware.

To solve these problems, this thesis proposes two adversarial attack methods against end-to-end deep learning malware detection models, targeting the black-box and white-box scenarios respectively, together with an adversarial defense method against both black-box and white-box adversarial malware attacks. The main work is as follows:

1. To address the problems that adversarial malware loses its functionality, too much perturbation is injected, and the attack success rate is low in black-box attacks, this thesis proposes a black-box adversarial malware attack method based on a genetic algorithm. The method formulates adversarial malware generation as an optimization problem, strengthens the adversarial perturbation through the genetic algorithm, and injects the perturbation with function-preserving manipulations to generate adversarial malware. This enhances the adversarial capability of the perturbation, reduces the amount of injected perturbation, and improves the attack success rate. Experiments show that in the black-box scenario the method generates adversarial malware with low perturbation and preserved functionality, and its attack success rate is 56.4% higher on average than that of existing methods (an illustrative sketch of the genetic-algorithm idea is given after this abstract).

2. To address the problems of a low attack success rate and a long generation time in white-box attacks, a gradient-based white-box
adversarial malware attack framework is proposed. The framework iteratively updates the input perturbation by inverse gradient descent and injects the perturbation with function-preserving manipulations to generate adversarial malware. Inverse gradient descent is an efficient way to strengthen the perturbation and shortens the time needed to generate adversarial examples. The framework is applied to a grayscale-image malware detection model and to the MalConv model, yielding FGAM (Fast Generating Adversarial Malware) and GAMBD (Generating Adversarial Malware Based on Gradient) respectively for white-box attacks. Experiments show that the framework generates adversarial malware efficiently and quickly in the white-box scenario. Compared with existing methods, FGAM improves the attack success rate by 84% on average, while GAMBD achieves a 100% attack success rate and shortens the average generation time by a factor of 30 (a second sketch below illustrates the gradient-sign idea).

3. To address the problem that existing adversarial defense research cannot be applied directly to adversarial malware, an adversarial-training-based defense method, ATWM (Adversarial Training for Windows Malware), is proposed. First, simple adversarial perturbations are filtered out by preprocessing, which defends against simple adversarial example attacks and avoids the accuracy loss caused by adversarial training. Second, drawing on the structural characteristics of malware, diversified adversarial examples are used for adversarial training to improve the adversarial robustness of the malware detection model. Experiments show that ATWM improves the model's adversarial defense capability by 16.7% on average without reducing its accuracy (a third sketch below illustrates the adversarial-training pattern).
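To make the genetic-algorithm idea in contribution 1 concrete, the following is a minimal Python sketch, not the thesis's actual implementation: `malicious_score` is a placeholder for querying the black-box detector, and byte padding stands in for the full set of function-preserving manipulations; all names and parameters are illustrative assumptions.

```python
import random

def malicious_score(sample: bytes) -> float:
    # Placeholder for the black-box detector query; in practice this would
    # return the target model's maliciousness probability for the byte string.
    return random.random()

def apply_padding(sample: bytes, payload: bytes) -> bytes:
    # One example of a function-preserving manipulation: bytes appended to the
    # end of a PE file (the overlay) do not change its execution behaviour.
    return sample + payload

def genetic_attack(malware: bytes, pop_size=20, generations=50,
                   payload_len=64, mutation_rate=0.1, threshold=0.5):
    # Each individual encodes a candidate perturbation payload as a list of byte values.
    population = [[random.randrange(256) for _ in range(payload_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness: a lower maliciousness score means the candidate is closer to evading.
        population.sort(key=lambda p: malicious_score(apply_padding(malware, bytes(p))))
        best = apply_padding(malware, bytes(population[0]))
        if malicious_score(best) < threshold:
            return best                      # evasion succeeded
        # Selection: keep the better half, then refill with crossover and mutation.
        parents = population[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, payload_len)
            child = a[:cut] + b[cut:]
            children.append([random.randrange(256) if random.random() < mutation_rate
                             else v for v in child])
        population = parents + children
    return None                              # no adversarial example within the budget

# Usage on a dummy byte string standing in for a real malware sample.
adversarial = genetic_attack(b"MZ" + bytes(1024))
```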
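The gradient-driven perturbation in contribution 2 can be sketched as an iterative sign-of-gradient update restricted to a region that can be changed without breaking functionality. The PyTorch model below is a toy stand-in for the grayscale-image detector, and the mask marks a hypothetical appended region; it does not reproduce FGAM or GAMBD, only the general idea under these assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-in for a grayscale-image malware classifier (class 0 = benign, 1 = malicious).
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))

def gradient_attack(image, perturb_mask, steps=10, epsilon=0.05):
    # Iteratively push the sample toward the benign class by descending the
    # benign-class loss, changing only positions allowed by perturb_mask
    # (e.g. an appended region, so the original bytes stay intact).
    x = image.clone().detach().requires_grad_(True)
    target = torch.tensor([0])                        # target label: benign
    for _ in range(steps):
        logits = model(x.unsqueeze(0))
        if logits.argmax(dim=1).item() == 0:
            break                                     # already classified as benign
        loss = nn.functional.cross_entropy(logits, target)
        loss.backward()
        with torch.no_grad():
            x -= epsilon * x.grad.sign() * perturb_mask   # sign-of-gradient step
            x.clamp_(0.0, 1.0)                            # keep values in the valid range
        x.grad.zero_()
    return x.detach()

# Usage: perturb only rows appended at the end of the "image".
image = torch.rand(64, 64)
mask = torch.zeros(64, 64)
mask[60:, :] = 1.0
adversarial_image = gradient_attack(image, mask)
```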
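The adversarial-training pattern behind contribution 3 (ATWM), preprocessing plus training on a mix of clean and adversarial samples, can be sketched as follows. The quantisation filter, the one-step perturbation, and the toy model are assumptions chosen for illustration and are not the thesis's actual components.

```python
import torch
import torch.nn as nn

# Toy detector and optimizer; only the training pattern is illustrated.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def preprocess(x):
    # Filtering step: simple quantisation stands in for removing low-magnitude
    # adversarial noise before the detector sees the sample.
    return torch.round(x * 16) / 16

def make_adversarial(x, y, epsilon=0.05):
    # One-step gradient-sign perturbation used to diversify the training batch.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def train_step(x, y):
    # Adversarial training: learn from both the filtered clean batch and the
    # filtered adversarial batch so accuracy on clean samples is preserved.
    x_adv = make_adversarial(x, y)
    batch = torch.cat([preprocess(x), preprocess(x_adv)])
    labels = torch.cat([y, y])
    optimizer.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a random batch of 8 "grayscale malware images".
x = torch.rand(8, 64, 64)
y = torch.randint(0, 2, (8,))
train_step(x, y)
```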
Keywords/Search Tags:Deep Learning, Adversarial Examples, Adversarial Malware, Adversarial Attack, Adversarial Defense