
Research On A Trusted Artificial Intelligence Defense Method For Malware Against Adversarial Attacks

Posted on: 2021-02-06 | Degree: Master | Type: Thesis
Country: China | Candidate: M L Zhang | Full Text: PDF
GTID: 2428330629984467 | Subject: Cyberspace security
Abstract/Summary:
Driven by economic benefits, security incidents related to malware are increasingly frequent. According to AV-TEST's malware research data, more than 250,000 new malware samples are discovered every day on average. At the same time, malicious code is evolving rapidly. Faced with such an increasingly complex environment, manual analysis cannot process this volume of malware in time. Artificial intelligence technology is therefore widely used in malware analysis and detection systems, with detection models trained on large collections of malware samples. However, with the widespread application of artificial intelligence models, the credibility of artificial intelligence itself has been challenged, making it a weak link in the system.

Among the challenges faced by artificial intelligence, the problem of adversarial samples is particularly severe. Traditional machine learning models mostly rest on the assumption that training data and test data follow the same distribution, so rare samples, or even maliciously constructed abnormal samples, may cause the model to output abnormal results. By exploiting this fact to construct adversarial samples, attackers can indirectly interfere with the inference process of an artificial intelligence model and mount attacks such as evading detection. Attackers need little information about the target model, because a black-box attack can be converted into a white-box attack (for example, by attacking a locally trained substitute model and relying on transferability); the attack method is thus concealed, simple, and efficient. Moreover, because adversarial samples are rooted in the black-box nature of artificial intelligence models themselves, the problem has not yet been effectively solved.

The adversarial sample problem has been studied extensively in computer vision, covering both attack and defense methods. In deep-learning-based malware detection, however, people often attend only to classification quality and model effectiveness while ignoring the credibility of the malware detection model itself. This inevitably creates a potential crisis for the application of artificial intelligence technology in the security field. To solve these problems in the malware detection field, this thesis proposes a trusted protection scheme against adversarial samples based on a byte-level malware detection model. By analyzing conserved features in the malware input space and enforcing a trusted defense via neuron invariants in the computation process, we design adversarial sample defenses from two different perspectives. We implement the scheme on TensorFlow. Experimental results show that this multi-level trusted protection scheme can effectively identify adversarial samples; compared with similar work it has lower false positive and false negative rates, verifying the scheme's effectiveness and efficiency.

The main contributions of this thesis are as follows:

1. Existing defenses against malware adversarial samples are mostly transplanted from the image domain and lack integration with malware-specific characteristics. We therefore studied trusted adversarial sample defense at the malware input level and propose TCFD, a trusted defense scheme based on the conserved features of malware in the input space: conserved features that characterize the malicious behavior of malware serve as a trusted basis for detecting adversarial samples, as sketched below. We implement a prototype of this trusted protection scheme.
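The abstract does not list the exact conserved features, so the following is only a minimal sketch of the TCFD idea under illustrative assumptions: imported API names stand in for conserved features, and a contradiction between the conserved-feature verdict and the byte-level model's verdict flags a suspected adversarial sample. All names here (SUSPICIOUS_IMPORTS, conserved_verdict, is_adversarial) are hypothetical.

```python
# Illustrative sketch of the conserved-feature idea, NOT the thesis code.
# Premise: a functionality-preserving adversarial perturbation can flip a
# byte-level verdict but cannot remove the features tied to the malicious
# behavior itself, so the contradiction is the detection signal.

from typing import Callable, Set

# Hypothetical API names treated as conserved indicators of malicious behavior.
SUSPICIOUS_IMPORTS: Set[str] = {
    "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread",
}

def conserved_verdict(imports: Set[str]) -> bool:
    """Trusted basis: True if conserved features indicate malicious behavior."""
    return bool(imports & SUSPICIOUS_IMPORTS)

def is_adversarial(sample_bytes: bytes,
                   imports: Set[str],
                   byte_model: Callable[[bytes], bool]) -> bool:
    """Flag a sample whose byte-level verdict contradicts the conserved one.

    byte_model returns True when the byte-level detector labels the sample
    malicious; a benign verdict on a sample whose conserved features still
    signal malice is the signature of an evasion attempt.
    """
    return conserved_verdict(imports) and not byte_model(sample_bytes)
```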
2. Current defenses against malware adversarial samples focus only on the input feature level and lack analysis of the model's internal data flow. We therefore studied trusted adversarial sample defense in the computation process based on invariants and propose a new invariant-search scheme, BVI. Borrowing the idea of boundary checking from traditional security, BVI takes the distribution of neuron activations observed while the deep learning model runs as invariants, meeting the requirement of a trusted computation process (see the sketch after this list). Compared with previous invariant-based defense schemes, our method incurs lower overhead and a lower false alarm rate.

3. Based on the above research, we implement the protection scheme on TensorFlow using a multi-model ensemble. Tested against three strong malware adversarial sample attack test sets, the trusted defense scheme effectively detects adversarial samples while maintaining the normal classification rate, and it exhibits very low false positive and false negative rates.
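Again as an illustration only (the thesis's exact invariant form and thresholds are not reproduced here), the following sketch records per-neuron activation bounds on trusted data and treats out-of-range activations at inference time as an invariant violation. The toy dense model, the probed layer name, and the margin/tolerance values are all assumptions.

```python
# Illustrative sketch of activation-bound invariants, NOT the thesis code.
# Premise: on trusted inputs each neuron's activation stays in a stable range;
# adversarial samples push some activations out of range, so per-neuron
# [min, max] bounds act as run-time invariants checked by boundary inspection.

import numpy as np
import tensorflow as tf

def build_probe(model: tf.keras.Model, layer_name: str) -> tf.keras.Model:
    """Expose one hidden layer's activations as a model output."""
    return tf.keras.Model(model.input, model.get_layer(layer_name).output)

def fit_bounds(probe: tf.keras.Model, x_train: np.ndarray, margin: float = 0.05):
    """Record per-neuron activation bounds on data the model is trusted on."""
    acts = probe.predict(x_train, verbose=0)
    acts = acts.reshape(len(acts), -1)   # flatten per-sample activations
    lo, hi = acts.min(axis=0), acts.max(axis=0)
    pad = margin * (hi - lo)             # small slack to reduce false alarms
    return lo - pad, hi + pad

def violates_invariant(probe, x, lo, hi, tolerance: float = 0.01) -> np.ndarray:
    """Boundary inspection: flag samples with too many out-of-range neurons."""
    acts = probe.predict(x, verbose=0).reshape(len(x), -1)
    out_of_range = (acts < lo) | (acts > hi)
    return out_of_range.mean(axis=1) > tolerance

# Example wiring with a toy model standing in for the byte-level detector.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu", name="hidden"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
probe = build_probe(model, "hidden")
x_train = np.random.rand(256, 64).astype("float32")
lo, hi = fit_bounds(probe, x_train)
flags = violates_invariant(probe, np.random.rand(8, 64).astype("float32"), lo, hi)
```

In this reading, an input-level check (as in TCFD) and a computation-process check (as in BVI) are complementary, which is consistent with the multi-model ensemble described in contribution 3.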
Keywords/Search Tags: Deep Learning, Adversarial Samples, Trusted Artificial Intelligence, Malware, Trust Measurement, Invariant