
Robust Deep Learning For Modulation Recognition Based On Perturbation Generative Model

Posted on: 2022-03-15    Degree: Master    Type: Thesis
Country: China    Candidate: L Lin    Full Text: PDF
GTID: 2518306512952189    Subject: Communication and Information System
Abstract/Summary:
Deep Learning (DL) has made major breakthroughs in computer vision, speech signal processing, and finance. Compared with statistical learning methods, however, the Deep Neural Networks (DNNs) used in DL suffer from insufficient robustness: adding a carefully designed small perturbation to the input signal produces an adversarial example that causes the network to output a wrong classification. Many scholars have introduced DL into wireless communication system design, improving performance over traditional systems built on statistical models. However, wireless communication runs in an open environment and is susceptible to adversarial examples and natural environmental interference, which can significantly degrade neural network performance. Research on the robustness of deep learning algorithms in wireless communication environments is therefore of great significance for promoting the integration of DL and wireless communication. Taking a deep network model for wireless modulation signal recognition as its basis, this thesis analyzes the model's performance under adversarial attack and proposes corresponding robustness-enhancement algorithms, which achieve good performance.

This thesis first analyzes the robustness problems of the modulation recognition deep neural network model. On the RML2016.10a dataset, additive White Gaussian Noise (AWGN) is added, and norm-constrained Projected Gradient Descent (PGD) is used to generate adversarial samples, which significantly reduces the classification accuracy of the recognition network. Among these attacks, the adversarial perturbation generated by PGD is the strongest, reducing the recognition accuracy of the model trained on the original data by 23%. Next, a Conditional Variational Autoencoder (CVAE) model is used to generate more strongly perturbed adversarial samples; the classification accuracy of the recognition network then decreases even more markedly, by 32% compared with the original-data model. These results show that the modulation recognition deep neural network model has flaws in its robustness under adversarial attack.

Because deep learning is sensitive to adversarial samples, the adversarial samples generated by the CVAE model must be strictly evaluated. In this thesis they are evaluated in terms of deterministic properties and probabilistic properties, and experiments show that, once these properties are satisfied, the generated adversarial samples can be used in robustness research.

Finally, two modulation recognition training methods are proposed: the adversarial examples generated by the PGD algorithm and by the CVAE model are added to the training dataset respectively, yielding the Model-based Robust Training (MRT) algorithm and the Model-based Adversarial Training (MAT) algorithm. Both algorithms are based on the classic adversarial training defense mechanism. Adversarial training is a two-level (bilevel) optimization problem: the inner maximization of the loss is solved by generating adversarial samples, while the outer minimization finds model parameters that minimize the adversarial loss. The two algorithms differ from standard adversarial training in that they depend on the existence of a perturbation model, a mapping describing how the input data x is perturbed into x'. Moreover, when solving the inner maximization problem, both the MRT and the MAT algorithm expand the original dataset with the perturbation parameters that maximize the loss. Experimental results show that the MRT and MAT algorithms effectively improve the robustness of the network model in an adversarial environment, and that the two algorithms fit the CVAE model better.
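The norm-constrained PGD attack discussed above can be sketched in a few lines. The sketch below uses a logistic-regression classifier as a tiny stand-in for the recognition network, with `eps` playing the role of the L-infinity norm constraint; all function names and parameter values are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """L-infinity PGD attack on a logistic-regression classifier.

    x: input signal (1-D array), y: true label (0 or 1), w, b: classifier
    weights; eps is the norm-ball radius, alpha the step size.
    Illustrative stand-in for attacking the recognition network."""
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        # gradient of the cross-entropy loss with respect to the input
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w
        # take a signed gradient-ascent step on the loss ...
        x_adv = x_adv + alpha * np.sign(grad)
        # ... then project back onto the eps-ball around the clean input
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The projection step is what enforces the norm constraint: however many ascent steps are taken, the adversarial sample never leaves the eps-neighborhood of the clean signal.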
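The CVAE-based perturbation generator conditions on the class label: the encoder maps a (signal, label) pair to the parameters of a latent Gaussian, and the decoder maps a sampled latent code back to a perturbation. A minimal forward-pass sketch, with toy weight matrices and dimensions chosen purely for illustration (the thesis's network is far larger, and `cvae_generate`, `enc_w`, `dec_w` are hypothetical names):

```python
import numpy as np

rng = np.random.default_rng(0)

def cvae_generate(x, label_onehot, enc_w, dec_w, latent_dim=8):
    """One forward pass of a toy Conditional VAE used as a perturbation
    generator: encode (x, label), sample a latent code, decode a delta."""
    h = np.concatenate([x, label_onehot])
    # encoder: condition on (signal, label), output Gaussian parameters
    stats = np.tanh(enc_w @ h)
    mu, logvar = stats[:latent_dim], stats[latent_dim:]
    # reparameterisation trick: z = mu + sigma * noise
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(latent_dim)
    # decoder: map (latent code, label) to a bounded perturbation delta
    delta = np.tanh(dec_w @ np.concatenate([z, label_onehot]))
    return x + delta  # adversarial sample x' = x + delta
```

The tanh output layer keeps every component of the perturbation bounded, a simple surrogate for the amplitude limits a trained generator would learn.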
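The two-level structure of MRT/MAT-style adversarial training can be sketched as an outer minimisation loop that, each epoch, calls an attack (the inner maximisation) and expands the clean training set with the resulting adversarial copies. The sketch again uses logistic regression in place of the recognition network; `attack` can be any x -> x' mapping (PGD- or CVAE-based), and every name here is an illustrative assumption rather than the thesis code.

```python
import numpy as np

def adversarial_train(X, y, attack, epochs=50, lr=0.1):
    """Outer-minimisation loop in the spirit of MRT/MAT.

    Each epoch: perturb every sample with `attack(x, y, w, b)` (inner
    maximisation), stack the adversarial copies onto the clean set, and
    take a gradient step on the augmented cross-entropy loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # inner step: generate adversarial copies of the training data
        X_adv = np.array([attack(xi, yi, w, b) for xi, yi in zip(X, y)])
        # expand the original dataset with the perturbed samples
        X_aug = np.vstack([X, X_adv])
        y_aug = np.concatenate([y, y])
        # outer step: logistic-regression gradient descent on the
        # augmented set (cross-entropy gradient)
        p = 1.0 / (1.0 + np.exp(-(X_aug @ w + b)))
        grad_w = (p - y_aug) @ X_aug / len(y_aug)
        grad_b = np.mean(p - y_aug)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Swapping the `attack` argument between a PGD-style and a CVAE-style generator is the analogue of choosing between the MAT and MRT variants described above.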
Keywords/Search Tags: deep learning, adversarial examples, robustness, conditional variational autoencoder, model