
Adversarial Training Based on Virtual Adversarial Examples and Logit Piecewise Points

Posted on: 2022-10-17  Degree: Master  Type: Thesis
Country: China  Candidate: Q Q Shao  Full Text: PDF
GTID: 2518306734487614  Subject: Applied Statistics
Abstract/Summary:
In recent years, deep learning, as a new statistical machine learning method, has made important breakthroughs in many fields, but it is vulnerable to adversarial examples, which pose a serious threat to its deployment in safety-sensitive domains such as medicine, autonomous driving, and security. Adversarial training is regarded as an effective defense against adversarial examples. However, existing studies show that adversarial training suffers from low training efficiency and a difficult trade-off between robust and clean accuracy, which limits the adoption of robust models in practice. To address these problems, the main contributions of this thesis are as follows:

(1) To improve computational efficiency, an adversarial training method based on virtual adversarial examples, called virtual adversarial training, is proposed. A threshold mechanism selects adversarial source examples, from which virtual adversarial examples are generated, while non-adversarial source examples remain unchanged. Losses are then computed and network weights are updated by backpropagation. Experimental results on the CIFAR-10 and ImageNet-30 datasets show that, compared with traditional adversarial training, the method improves clean accuracy by about 7% to 14%, essentially matches the defensive effect of traditional adversarial training in perturbed accuracy, and shortens training time by about 78% relative to the slowest PGD adversarial training.

(2) To improve the trade-off between model robustness and clean accuracy, a piecewise adversarial training method based on logits, called Logit piecewise adversarial training, is proposed. Examples are grouped by setting Logit piecewise points, and then, following a perturbation-constraint setting principle, an appropriate perturbation constraint is designed for each group before adversarial training. Experimental results on the CIFAR-10 and ImageNet-30 datasets
show that, compared with other adversarial training methods, the proposed method balances robustness and clean accuracy more flexibly, with training efficiency 2 to 4 times that of the other methods.

This thesis studies adversarial training strategies in the context of deep learning, providing an effective reference for deploying robust models in real-world scenarios. The proposed methods also offer useful insights, and further research can build on them to explore new attack and defense techniques.
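The threshold mechanism in method (1) can be sketched as follows. This is a minimal NumPy illustration under assumed details not given in the abstract: a linear classifier, top-class confidence below a threshold `tau` as the selection criterion for "adversarial source" examples, the model's own prediction as the virtual (label-free) target, and a single FGSM-style step; the thesis's actual model, selection rule, and perturbation method may differ.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_and_perturb(X, W, eps=0.1, tau=0.9):
    """Threshold mechanism (sketch): examples whose top-class
    confidence falls below tau are treated as adversarial source
    examples and receive a virtual (label-free) perturbation;
    all other examples are left unchanged."""
    p = softmax(X @ W)                 # predictions of a linear classifier
    mask = p.max(axis=1) < tau         # select adversarial source examples
    y_virt = p.argmax(axis=1)          # model's own prediction as virtual label
    onehot = np.eye(W.shape[1])[y_virt]
    grad = (p - onehot) @ W.T          # d(cross-entropy)/dX for logits = X @ W
    X_adv = X.copy()
    X_adv[mask] += eps * np.sign(grad[mask])   # FGSM-style single step
    return X_adv, mask
```

Because only the selected subset is perturbed, the per-batch cost of generating adversarial examples shrinks, which is one plausible source of the reported efficiency gain.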
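The grouping step of method (2) can be sketched as a piecewise mapping from each example's logit statistics to a per-group perturbation budget. Everything concrete here is a hypothetical choice, not the thesis's: the top-1 minus top-2 logit margin as the grouping statistic, the cut points, the budget values, and the direction of the assignment (smaller budgets for weakly separated examples to protect clean accuracy, larger budgets for confidently separated ones to push robustness).

```python
import numpy as np

def logit_margin(logits_arr):
    """Top-1 minus top-2 logit: how confidently the model separates classes."""
    top2 = np.sort(logits_arr, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def piecewise_eps(margin, cut_points=(0.5, 2.0), eps_levels=(0.02, 0.05, 0.10)):
    """Map each example's logit margin to a per-group perturbation budget.
    cut_points define the Logit piecewise segments; eps_levels give one
    perturbation constraint per segment (both values hypothetical)."""
    idx = np.searchsorted(np.asarray(cut_points), margin)
    return np.asarray(eps_levels)[idx]
```

The returned per-example budgets would then bound the perturbation used when generating adversarial examples for each group, which is how a single training run can trade robustness against clean accuracy group by group.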
Keywords/Search Tags: adversarial example, virtual adversarial training, Logit piecewise adversarial training, efficiency