
Research On Image Adversarial Examples Attack And Application Based On Deep Learning

Posted on: 2022-10-25
Degree: Master
Type: Thesis
Country: China
Candidate: T G Li
Full Text: PDF
GTID: 2518306476990649
Subject: Communication and Information System
Abstract/Summary:
Deep learning algorithms have achieved great success in the field of computer vision, but studies have shown that deep learning models are vulnerable to adversarial examples, which cause them to make wrong decisions. This challenges the further development of deep learning and urges researchers to pay more attention to the relationship between adversarial attacks and deep learning security. Current mainstream attack algorithms take many forms and can be tailored to deep learning models in different situations. The main research directions include improving the attack success rate, reducing the resource cost of example generation, and black-box attacks that approximate realistic attack scenarios.

This thesis focuses on the study of adversarial examples. The process of generating examples is treated as retraining the model, with the perturbation added to the example as the only parameter optimized during retraining. From the perspective of gradient optimization, we propose an attack algorithm that improves the attack success rate while keeping the time cost under control. Nesterov momentum is introduced into the algorithm: it accelerates convergence, improves the direction of the gradient update in the optimization, and speeds up example generation. To obtain a better attack effect, an iterative warm-restart training method is used, and a projected gradient step limits the range of the perturbation generated at each iteration. Two public datasets are selected for the experiments, and different deep learning models are pretrained on each dataset as the attacked models. The experimental results show that the proposed algorithm improves the success rate of attacks on deep learning models without increasing the time overhead.

In addition, building on the study of the attack algorithm, this thesis proposes an efficient data preprocessing scheme that defends against attacks by exploiting the robustness properties of adversarial examples, providing ideas for research on defense algorithms. Finally, based on the results of this thesis and an analysis of future developments, a lightweight adversarial attack-and-defense framework is constructed. We point out that building such an attack-and-defense framework is of great significance to research on the security of deep learning models.
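The attack loop described above, combining a Nesterov momentum look-ahead with a projected-gradient constraint on the perturbation, can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: `grad_fn` stands in for the attacked model's input-gradient computation, and the parameter names (`eps`, `alpha`, `mu`) are assumptions for the sketch.

```python
import numpy as np

def nesterov_momentum_attack(x, grad_fn, eps=0.1, alpha=0.02, mu=0.9, steps=10):
    """Sketch of a Nesterov-momentum iterative attack with L-inf projection.

    x       : clean input in [0, 1] (numpy array)
    grad_fn : caller-supplied function returning the loss gradient w.r.t.
              the input (hypothetical stand-in for the attacked model)
    eps     : L-infinity bound on the total perturbation (projection radius)
    alpha   : per-iteration step size
    mu      : momentum decay factor
    """
    x_adv = x.astype(float).copy()
    g = np.zeros_like(x_adv)
    for _ in range(steps):
        # Nesterov look-ahead: evaluate the gradient at the anticipated point
        x_nes = x_adv + alpha * mu * g
        grad = grad_fn(x_nes)
        # accumulate the L1-normalized gradient into the momentum buffer
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        # ascend the loss along the sign of the accumulated momentum
        x_adv = x_adv + alpha * np.sign(g)
        # projected-gradient constraint: stay within the eps-ball around x
        x_adv = np.clip(x_adv, x - eps, x + eps)
        # keep the result a valid image
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

Evaluating the gradient at the look-ahead point `x_adv + alpha * mu * g`, rather than at `x_adv` itself, is what distinguishes the Nesterov variant from plain momentum and is the source of the faster convergence the thesis describes.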
Keywords/Search Tags: Neural network, Adversarial examples, Deep learning security, Model defense, Robustness