
Research On The Technology Of Adversarial Attack For Signal Recognition

Posted on: 2022-03-09
Degree: Master
Type: Thesis
Country: China
Candidate: H J Zhao
Full Text: PDF
GTID: 2518306353979129
Subject: Information and Communication Engineering

Abstract/Summary:
With the rapid development of modern communication networks, the mobile Internet, satellite navigation networks, and other technologies, the technical level and business scale of wireless communications have grown dramatically. Deep neural networks hold unique advantages over other methods in autonomous analysis, automatic feature extraction, and nonlinear fitting, providing intelligent means for identification, evaluation, and decision making in communications. However, recent studies show that deep neural networks are highly vulnerable to adversarial examples: by adding a small perturbation to a signal waveform, an adversarial example in the signal domain can be constructed that causes an artificial-intelligence model to make prediction or recognition errors, threatening modern wireless communication systems, cognitive radio networks, electromagnetic reconnaissance, and electromagnetic spectrum warfare applications.

At present, research on adversarial attacks for signal recognition faces three key problems. First, existing studies are largely confined to fields such as image, speech, and text, and the exploration of adversarial attack technology in communications has only just begun. Second, current research focuses heavily on improving attack performance while ignoring whether the attack traces are easily captured by machine vision or human perception. Third, current research concentrates on traditional misclassification attacks, whereas techniques that induce a model to output a specific result urgently need development. Addressing these three scientific issues, this thesis studies adversarial attack techniques for signal recognition. The main work and contributions are as follows.

First, based on classic adversarial example generation algorithms, the thesis verifies the effectiveness of adversarial attacks on signal recognition. Representative attack methods, combined with the cross-model transferability of adversarial examples, are applied to signal datasets, and the influence of key factors such as attack type, signal-to-noise ratio, and the number of iteration steps is examined. Attack algorithms that adapt well to signal recognition models are also identified. The experimental results show that adversarial examples severely degrade the recognition accuracy of intelligent models and confirm the cross-model generalization ability of adversarial examples.

Second, to address the problem of perceptual concealment in adversarial attacks on signal recognition, the dissertation proposes a new similarity-difference evaluation index and an iterative adversarial attack algorithm based on Newton's momentum. The index exploits the in-phase and quadrature components of the signal waveform and, through mathematical modeling, quantifies how closely the waveforms before and after perturbation fit each other, thereby measuring the perceptual concealment of different attack algorithms. The experimental results show that the proposed attack algorithm not only reduces the waveform perception deviation caused by the adversarial perturbation but also mounts highly effective adversarial attacks.
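The abstract does not spell out the exact form of the similarity-difference index or of the Newton-momentum update, so the following PyTorch sketch is purely illustrative: a standard momentum-style iterative attack on batches of I/Q waveforms (assumed shape [B, 2, N]) together with a simple cosine-similarity measure between clean and perturbed signals. The function names momentum_iterative_attack and waveform_similarity, the MI-FGSM-style update rule, and the cosine measure are assumptions for illustration, not the algorithms proposed in the thesis.

```python
import torch
import torch.nn.functional as F

def momentum_iterative_attack(model, x, y, eps=0.01, steps=10, mu=1.0):
    """Illustrative momentum-based iterative attack (assumption, not the
    thesis's Newton-momentum method). Perturbs a batch of I/Q waveforms x
    (shape [B, 2, N]) so that `model` misclassifies them, accumulating a
    momentum term over gradients normalized by their mean absolute value."""
    alpha = eps / steps                 # per-step perturbation budget
    g = torch.zeros_like(x)             # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # accumulate the normalized gradient into the momentum buffer
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2), keepdim=True) + 1e-12)
        x_adv = (x_adv + alpha * g.sign()).detach()
        # keep the total perturbation inside the eps ball around x
        x_adv = (x + torch.clamp(x_adv - x, -eps, eps)).detach()
    return x_adv

def waveform_similarity(x, x_adv):
    """Toy perceptual-concealment proxy: cosine similarity of the flattened
    I/Q components of the clean and perturbed waveforms (higher means the
    perturbation deviates less from the original waveform)."""
    return F.cosine_similarity(x.flatten(1), x_adv.flatten(1), dim=1)
```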
Finally, the thesis puts forward a logits-combination evaluation index for the challenge of verifying and evaluating target-induced adversarial attacks in signal recognition. The index gives the change curve of the logit difference predicted by the model from the perspective of the source class and the target class, enriching traditional evaluation methods based on the confusion matrix and prediction confidence. The experimental results show that the target-class logit difference can effectively analyze the vulnerability and anti-perturbation ability of different signal examples while evaluating the target-induction performance of adversarial examples in a fine-grained manner.
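As an illustration of how such a source-class versus target-class logit gap could be tracked during a targeted attack, a hedged PyTorch sketch follows. The thesis's actual logits-combination index is not specified in the abstract; the function logit_difference_curve, the targeted FGSM-style iteration, and the batch-mean gap are assumptions.

```python
import torch
import torch.nn.functional as F

def logit_difference_curve(model, x, src_class, tgt_class, eps=0.01, steps=10):
    """Run a simple targeted iterative attack on a batch x and record, before
    each update and once at the end, the batch-mean gap between the
    target-class and source-class logits; a curve crossing zero suggests the
    examples have been induced toward the target class (illustrative only)."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    target = torch.full((x.shape[0],), tgt_class, dtype=torch.long, device=x.device)
    curve = []
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        curve.append((logits[:, tgt_class] - logits[:, src_class]).mean().item())
        # step toward the target class by descending its cross-entropy loss
        loss = F.cross_entropy(logits, target)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - alpha * grad.sign()).detach()
        # keep the total perturbation inside the eps ball around x
        x_adv = (x + torch.clamp(x_adv - x, -eps, eps)).detach()
    with torch.no_grad():
        logits = model(x_adv)
        curve.append((logits[:, tgt_class] - logits[:, src_class]).mean().item())
    return curve
```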
Keywords/Search Tags:Deep neural network, signal recognition, adversarial example, invisibility, targeted attack