
Research On Intelligent Detection Of Malicious Code Based On Attention Mechanism

Posted on: 2024-08-25
Degree: Master
Type: Thesis
Country: China
Candidate: Y He
Full Text: PDF
GTID: 2558307112457844
Subject: Computer technology

Abstract/Summary:
In the 21st century, information technology has developed rapidly; while we enjoy the convenience it brings, we also face an increasingly severe network security situation. In this complex environment, malicious code has become the main mode of attack. With the popularity of Internet applications, the number and variety of malicious code samples and their variants are growing exponentially, and malicious attacks are increasing, making malware one of the major threats to the Internet industry. The proliferation of malware poses a serious threat to individuals, enterprises, and even critical national infrastructure. In particular, survival techniques such as anti-tracking, code deformation, and communication hiding make malware difficult to detect, lower detection accuracy, and increase its concealment. How to detect malicious code efficiently has therefore become a pressing concern.

This thesis proposes a detection model based on the attention mechanism, which takes API call sequences as the research object and combines them with neural network models to detect and classify malicious code. We use word2vec to preprocess the API call functions of five major malware families extracted from Ember: Ramnit, Ethic, Sality, Emotet, and Ursnif. The attention mechanism serves as the backbone network of the detection and classification model.

Self-Attention is introduced to enable parallel computation: it more easily captures long-distance interdependent features within a sequence and, through training, assigns different weight coefficients to features in the sequence so that key features are extracted. LSTM and CNN layers are added to screen out important numerical features, reduce feature noise, and improve detection accuracy. The Transformer is an improvement on plain Self-Attention: it adds positional encoding to the input sequence, which strengthens the model's ability to capture sequence order. A residual structure around each sub-layer of the encoder alleviates the difficulty of training deep multi-layer networks, accelerates convergence, and improves training speed.

Four models are trained and compared on the same dataset: a Self-Attention-based LSTM, a Self-Attention-based CNN+LSTM, a Transformer-based LSTM, and a Transformer-based CNN+LSTM. The results show that all four models can detect and classify malicious code, and that the Transformer-based CNN+LSTM model achieves the best detection performance, with an accuracy of 97%.
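The word2vec preprocessing of API call sequences described in the abstract can be sketched as a minimal skip-gram trainer. This is an illustrative NumPy implementation, not the thesis's actual pipeline; the API names, embedding dimension, and hyperparameters below are assumptions (a real pipeline would typically use a library such as gensim):

```python
import numpy as np

def train_word2vec(seqs, dim=16, window=2, lr=0.05, epochs=80, seed=0):
    """Minimal skip-gram word2vec with a full softmax, over API-call sequences."""
    rng = np.random.default_rng(seed)
    vocab = {t: i for i, t in enumerate(sorted({t for s in seqs for t in s}))}
    V = len(vocab)
    W_in = rng.normal(scale=0.1, size=(V, dim))   # center-word embeddings
    W_out = rng.normal(scale=0.1, size=(V, dim))  # context-word embeddings
    for _ in range(epochs):
        for seq in seqs:
            ids = [vocab[t] for t in seq]
            for i, c in enumerate(ids):
                # each API call predicts its neighbours within the window
                for j in range(max(0, i - window), min(len(ids), i + window + 1)):
                    if j == i:
                        continue
                    v = W_in[c]
                    scores = W_out @ v
                    p = np.exp(scores - scores.max())
                    p /= p.sum()
                    p[ids[j]] -= 1.0              # softmax cross-entropy gradient
                    grad_v = W_out.T @ p
                    W_out -= lr * np.outer(p, v)
                    W_in[c] -= lr * grad_v
    return vocab, W_in

# Hypothetical API call traces; the names are illustrative, not from the dataset.
seqs = [
    ["CreateFileW", "WriteFile", "CloseHandle"],
    ["CreateFileW", "ReadFile", "CloseHandle"],
    ["RegOpenKeyExW", "RegSetValueExW", "RegCloseKey"],
] * 5
vocab, emb = train_word2vec(seqs)
```

Each API name then maps to a dense vector (`emb[vocab[name]]`), which becomes the input representation for the downstream attention model.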
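The best-performing Transformer-based CNN+LSTM model could look roughly like the following PyTorch sketch. The layer sizes, head count, and exact layer ordering are assumptions, since the abstract does not specify them; the sketch only illustrates the components described: positional encoding, self-attention with a residual connection and layer normalization, a convolutional feature filter, an LSTM, and a five-class (one per family) output:

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Sinusoidal positional encoding added to the token embeddings."""
    def __init__(self, dim, max_len=512):
        super().__init__()
        pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)
        div = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32)
                        * (-math.log(10000.0) / dim))
        pe = torch.zeros(max_len, dim)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):                 # x: (batch, seq, dim)
        return x + self.pe[: x.size(1)]

class TransformerCnnLstm(nn.Module):
    """Self-attention (residual + LayerNorm) -> CNN -> LSTM -> classifier."""
    def __init__(self, vocab_size, dim=64, heads=4, hidden=64, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.pos = PositionalEncoding(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, tokens):            # tokens: (batch, seq) of API-call ids
        h = self.pos(self.embed(tokens))
        a, _ = self.attn(h, h, h)         # self-attention: Q = K = V = h
        h = self.norm(h + a)              # residual connection around the sub-layer
        h = torch.relu(self.conv(h.transpose(1, 2))).transpose(1, 2)
        _, (hn, _) = self.lstm(h)
        return self.fc(hn[-1])            # logits over the five malware families

model = TransformerCnnLstm(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 50)))   # 2 sequences of 50 API calls
```

During training, the logits would feed a cross-entropy loss against the family labels (Ramnit, Ethic, Sality, Emotet, Ursnif).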
Keywords/Search Tags:Malicious code, API call function, Self-Attention, Transformer