
Research And Application On Adversarial Examples Under Security Of Deep Learning

Posted on: 2021-05-03
Degree: Master
Type: Thesis
Country: China
Candidate: Z H Zhang
Full Text: PDF
GTID: 2428330614463703
Subject: Information security
Abstract/Summary:
With the continual growth of data volume and computer hardware capability, deep learning has achieved breakthroughs in many tasks such as image classification, object detection, and human-machine interaction through efficient end-to-end inference. It has been applied to autonomous driving, face recognition, intrusion detection, and other applications with high security requirements, and its in-depth study and rich applications have greatly promoted the development of computer vision, natural language processing, and other fields. However, although deep learning is deployed in many security-critical scenarios (Deep Learning for Security), it remains a double-edged sword: the security issues raised by the technology itself must be taken seriously. AI security has therefore become a hot research subject closely associated with deep learning. Research on adversarial examples asks how to improve the robustness of a model, and of the whole system built on it, so as to achieve higher security.

At present, research and application of adversarial examples in the context of deep-learning security are concentrated mainly in computer vision, and can be divided into three aspects: adversarial attack, adversarial defense, and practical application. An adversarial attack generates adversarial examples that cause a target model to make wrong inferences. Adversarial defense aims to eliminate, through various security techniques, the negative impact that adversarial examples have on target models. These two opposing lines of research improve the security of deep learning and also contribute to the interpretability of deep learning as a black-box technology. In addition, how to exploit the characteristics of adversarial examples positively, for practical applications such as image encryption and blind watermarking, is an emerging research direction. Accordingly, this thesis proposes three algorithms along these research directions.

First, an analysis of published adversarial attack algorithms shows that most of them generate only imperceptible, non-semantic perturbations under laboratory settings, and that such perturbations are easily destroyed in practical applications. To address this problem, the thesis proposes Latent Encodings Targeted Transfer (LETT), a black-box attack that generates more precise, reasonable, and semantic perturbations in the latent feature space.

Second, detection-based defense against adversarial examples extracts features and computes various hand-crafted indicators to perform the detection, so designing suitable indicators is a central part of any detection algorithm. Following this idea, the thesis proposes Latent Encodings Anomaly Detection (LEAD), which introduces k-nearest neighbors with self-adaptive weights: it achieves high-quality detection through distance measurement and improves detection speed through dimensionality reduction.

Finally, given the characteristics of adversarial perturbations, implementing end-to-end blind watermarking is an interesting application. The thesis therefore proposes Latent Encodings Traceless Embedding (LETE), a generative adversarial network with feature concatenation and modified loss functions that embeds fixed-length encodings into images. The image reconstruction, encoding reconstruction, and noise immunity of the LETE model are all markedly improved.
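The detection idea described above, measuring how far a sample's latent features lie from the manifold of clean data, can be sketched with a plain k-nearest-neighbor distance score. This is an illustrative reconstruction under stated assumptions, not the thesis's LEAD implementation: the toy feature vectors, the choice of k, and the threshold are all hypothetical, and the self-adaptive weighting and dimensionality reduction of LEAD are omitted.

```python
import math

def knn_score(query, reference, k=3):
    """Mean Euclidean distance from `query` to its k nearest
    neighbors among `reference` (a list of clean feature vectors)."""
    dists = sorted(math.dist(query, ref) for ref in reference)
    return sum(dists[:k]) / k

def is_adversarial(query, reference, k=3, threshold=1.0):
    """Flag a sample whose latent encoding sits far from the
    clean-data manifold (score above a hypothetical threshold)."""
    return knn_score(query, reference, k) > threshold

# Toy 2-D "latent encodings": clean samples cluster near the origin.
clean = [(0.0, 0.1), (0.1, 0.0), (-0.1, 0.1), (0.0, -0.1)]
print(is_adversarial((0.05, 0.05), clean))  # near the cluster -> False
print(is_adversarial((3.0, 3.0), clean))    # far off-manifold -> True
```

In practice the reference set would hold latent encodings of known-clean inputs, and the threshold would be calibrated on a validation set rather than fixed by hand.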
Keywords/Search Tags: adversarial example, deep learning, data manifold, black-box attack, anomaly detection, encoding embedding