
Research On Adversarial Attack Method For Communication Signal Modulation Recognition

Posted on: 2024-08-03
Degree: Master
Type: Thesis
Country: China
Candidate: J Y Zhao
Full Text: PDF
GTID: 2568306941984039
Subject: Cyberspace security
Abstract/Summary:
Deep neural networks offer unique advantages such as automatic feature extraction, independent analysis, and high recognition accuracy, which has made the integration of artificial intelligence into modulation recognition a research hotspot. However, studies have shown that deep neural networks are vulnerable to adversarial examples: inputs that easily deceive many AI algorithms that perform well under normal conditions, seriously threatening the security and availability of AI systems. Research on adversarial attack methods for communication signal modulation recognition has only just begun. Existing work has mainly confirmed that the adversarial attack problem exists in this field and compared the effectiveness of classic attack methods against modulation recognition models; how to improve the transferability and concealment of signal adversarial examples remains open. Addressing these problems, this thesis studies and improves adversarial attack methods along two dimensions, the transferability and the concealment of adversarial examples. The main contributions are as follows:

(1) An adversarial attack method based on time-shift transformation. To address the weak transferability of signal adversarial examples, this thesis builds on classic adversarial attack methods and the idea of data augmentation, introducing signal time-shift transformation into the attack. By randomly time-shifting the input signal, "overfitting" during adversarial-example generation is effectively reduced and the transferability of the resulting examples improves. Experiments on the RML2016.10a dataset show that, compared with the momentum iterative fast gradient sign method (MI-FGSM), the method increases the black-box attack success rate against the ResNet and VT-CNN2 models by 9.9% and 11.1%, respectively, while maintaining white-box attack performance and concealment.

(2) An adversarial attack method based on partial derivative selection. To address the poor concealment of signal adversarial examples, this thesis proposes a selection step applied in each iteration: the signal's sampling points are sorted by the absolute value of the loss function's partial derivative, the perturbation is updated only at the points with the largest absolute values, and the partial derivatives of the remaining points are set to zero. This avoids the signal distortion caused by perturbing the many sampling points that contribute little to the loss, effectively improving the concealment of the adversarial examples. Experiments on the RML2016.10a dataset show that, compared with the Nesterov momentum iterative fast gradient sign method (NI-FGSM), the difference measure FD between the generated adversarial examples and the original examples is reduced by more than 10% while attack performance is maintained.
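The time-shift idea of method (1) can be sketched as follows. This is an illustrative NumPy reconstruction, not the thesis's exact algorithm: the linear surrogate classifier, the number of random shifts, and all hyperparameter values are assumptions; in practice the gradient would come from the target DNN via backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate classifier: linear logits W @ x (stand-in for a real
# modulation-recognition DNN).
N_CLASSES, SIG_LEN = 4, 2 * 128
W = rng.standard_normal((N_CLASSES, SIG_LEN)) * 0.1

def loss_grad(x, label):
    """Gradient of the cross-entropy loss w.r.t. the input signal."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[label] -= 1.0          # d(loss)/d(logits)
    return W.T @ p           # chain rule back to the input

def time_shift_mifgsm(x, label, eps=0.01, steps=10, mu=1.0, n_shifts=5):
    """MI-FGSM with random time-shift transformation: average gradients
    over randomly rolled copies of the signal so the perturbation does
    not overfit one alignment of the input."""
    alpha = eps / steps
    g, x_adv = np.zeros_like(x), x.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(n_shifts):
            s = int(rng.integers(0, len(x)))
            shifted = np.roll(x_adv, s)                     # random time shift
            grad += np.roll(loss_grad(shifted, label), -s)  # undo the shift
        grad /= n_shifts
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)    # momentum term
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)            # L_inf budget
    return x_adv

x = rng.standard_normal(SIG_LEN)
x_adv = time_shift_mifgsm(x, label=0)
```

Averaging the (un-shifted) gradients over several rolled copies plays the same role as data augmentation at training time: the perturbation must fool the model at many alignments, which is what improves black-box transferability.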
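The selection step of method (2) can likewise be sketched. This is a hedged illustration: the `keep_ratio` value, the step size, and the surrogate gradient are assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_partial_derivatives(grad, keep_ratio=0.2):
    """Zero out the partial derivatives of all but the top fraction of
    sampling points by |dL/dx_i| (keep_ratio is illustrative)."""
    k = max(1, int(len(grad) * keep_ratio))
    small = np.argsort(np.abs(grad))[:-k]  # indices of the smaller values
    masked = grad.copy()
    masked[small] = 0.0
    return masked

# One perturbation step using only the selected points: the signal is
# disturbed only where the loss is most sensitive, limiting distortion.
grad = rng.standard_normal(256)        # stand-in for dL/dx from a model
masked = select_partial_derivatives(grad)
x = rng.standard_normal(256)
x_adv = x + 0.001 * np.sign(masked)    # sign update; step size illustrative
```

Because `np.sign(0) == 0`, the de-selected sampling points are left untouched in the update, which is exactly what reduces the distortion (and hence the FD measure) relative to perturbing every point.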
Keywords/Search Tags:deep learning, modulation recognition, adversarial attack, transferability, concealment