
Research On Adversarial Samples Attack Defense Based On PCA

Posted on: 2021-03-02
Degree: Master
Type: Thesis
Country: China
Candidate: M Wu
Full Text: PDF
GTID: 2428330620965181
Subject: Applied Mathematics
Abstract/Summary:
At present, deep learning has become one of the most widely studied and applied technologies in the computer field. However, with the emergence of adversarial samples, its algorithms, models, and training data face many security threats, which in turn affect the security of practical applications built on deep learning. Aiming at the problem of machine learning security and defense against adversarial-sample attacks, a PCA-based defense method is proposed. The adversary mounts a white-box, non-targeted attack using the fast gradient sign method (FGSM), and PCA is applied to the MNIST dataset to defend deep neural network models against this evasion attack. The experimental results show that PCA can defend against adversarial-sample attacks, and the effect is best when the reduced dimension is 50; as the reduced dimension increases beyond this, the ability to defend against adversarial-sample attacks declines.

On the basis of reading and researching a large body of relevant literature and materials, this paper focuses on the following four aspects of defense against adversarial-sample attacks:

(1) The background and research significance of the subject are described, the current state of research at home and abroad is analyzed, the origin and development of adversarial samples and of defenses against them are introduced, and the importance of defending against adversarial-sample attacks for the safety of deep learning is argued. Such defense is an important research direction in machine learning and a problem that urgently needs to be solved.

(2) Deep-learning fundamentals are introduced, with a focus on convolutional neural networks, laying the foundation for the subsequent work. Background on adversarial samples is given, and several classic adversarial-sample generation methods are listed. Comparing these methods, FGSM needs only a single gradient computation and is non-targeted, so it generates adversarial samples quickly and with an obvious effect; the fast gradient sign method is therefore selected to generate the adversarial samples used in this paper.

(3) Several common adversarial-sample defense methods are introduced and their shortcomings analyzed. A new method is proposed, namely using principal component analysis (PCA) to defend against adversarial-sample attacks in deep neural networks, with explanations based on PCA's maximum-separability property and its underlying principles, together with a procedure for applying PCA as a defense. Experiments are performed on the MNIST dataset. The experimental results show that PCA can defend against adversarial-sample attacks, and the defense is best when the data are reduced to 50 dimensions.

(4) The work of this paper is summarized, the advantages and disadvantages of using PCA to defend against adversarial-sample attacks are set out, and future research directions in deep-learning security and defense against adversarial-sample attacks are discussed.
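FGSM, as described above, perturbs an input with a single gradient step: x_adv = x + eps * sign(grad_x L(x, y)). As a minimal illustrative sketch (not the thesis's MNIST CNN), the following applies FGSM to a tiny logistic-regression model whose input gradient is available in closed form; the weights, input, and eps are made-up values chosen so that the single step flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Untargeted FGSM: x_adv = x + eps * sign(dL/dx).

    For logistic regression with binary cross-entropy loss, the gradient
    of the loss w.r.t. the input is (sigmoid(w.x + b) - y) * w, so a
    single gradient evaluation is enough, matching FGSM's key property.
    """
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad_x)

# Illustrative model and input (hypothetical, not from the thesis).
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, 0.0])                  # w.x = 0.3  -> predicted class 1
x_adv = fgsm_attack(x, 1.0, w, b, eps=0.25)
# w.x_adv = 0.05 - 0.25 = -0.2            -> prediction flips to class 0
```

The perturbation is bounded by eps in every coordinate (max-norm), which is why FGSM samples stay visually close to the original image.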
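The PCA defense described in point (3) projects inputs onto the top-k principal components learned from clean training data (k = 50 in the thesis's MNIST experiments), discarding perturbation energy that lies outside the retained subspace. Below is a hedged numpy sketch of this preprocessing step; the function names `fit_pca` and `pca_filter` and the toy data are illustrative assumptions, not the thesis's exact pipeline.

```python
import numpy as np

def fit_pca(X, k):
    """Learn the mean and the top-k principal directions of X (n x d)."""
    mean = X.mean(axis=0)
    # Rows of Vt are the principal directions of the centered data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_filter(x, mean, components):
    """Project x onto the retained subspace and reconstruct it.

    The part of a perturbation orthogonal to the subspace is discarded,
    which is the defensive effect the method relies on.
    """
    return mean + components.T @ (components @ (x - mean))

# Illustrative demo: clean data lies on a 2-D subspace of R^5; an
# adversarial-style perturbation adds off-subspace noise that the
# PCA filter removes.
rng = np.random.default_rng(0)
B = rng.normal(size=(2, 5))                  # basis of the true subspace
X = rng.normal(size=(200, 2)) @ B            # "training" data
mean, comps = fit_pca(X, k=2)
x_clean = np.array([1.0, -0.5]) @ B
x_adv = x_clean + 0.3 * rng.normal(size=5)   # perturbed input
filtered = pca_filter(x_adv, mean, comps)    # closer to x_clean than x_adv
```

On MNIST the same idea applies with d = 784 pixels and k = 50 components; the trade-off reported in the abstract is that keeping more components preserves more of the adversarial perturbation and weakens the defense.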
Keywords/Search Tags:PCA, adversarial sample, attack, defense, deep learning