
Generation And Application Of Image Adversarial Examples For Neural Networks

Posted on: 2020-06-29  Degree: Master  Type: Thesis
Country: China  Candidate: Y J Liu  Full Text: PDF
GTID: 2428330572487278  Subject: Information and Communication Engineering

Abstract/Summary:
Deep neural networks have achieved great success in many machine learning tasks, such as image classification, speech recognition, natural language processing, and medical applications. However, researchers have recently shown that deep neural networks are vulnerable to adversarial examples: input samples formed by deliberately adding subtle perturbations to the data, which cause a model to give incorrect outputs with high confidence. In the image domain, such attacks severely hamper the deployment of neural network systems in security-critical applications such as autonomous vehicles, face recognition, and surveillance systems. The existence of adversarial examples therefore poses a great threat to the security of artificial intelligence: they may confuse AI-driven recognition systems, resulting in misjudgment and even system collapse or hijacking. From another perspective, however, adversarial examples also stimulate research on how to defend against such attacks and how to obtain more robust and reliable neural networks.

In recent years, research in this field by domestic and international teams can be divided into two aspects: attacks against neural networks and defenses for them. The attack side studies algorithms for constructing or generating adversarial examples that deceive neural networks. The defense side seeks to reconstruct adversarial examples into inputs that can be correctly recognized, or to train more robust neural networks that are not misled by adversarial examples, in order to ensure the security of artificial intelligence systems. Research on algorithms for generating adversarial examples is just as important as research on defenses: exploring attack algorithms not only promotes the development of more effective defenses, but also draws researchers' attention to positive applications of adversarial examples. Attackers should also pay attention to and exploit vulnerabilities other than those of the neural network itself, find other points of attack, and design adversarial examples from multiple angles, thereby promoting defenders' repair of neural networks and improving the robustness of systems. In addition, attackers should explore how to attack under harsh conditions, such as black-box attacks, in which the attacker does not know the internal structure or parameters of the network, and attacks with minimal information.
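To make the attack side of this discussion concrete, the following is a minimal sketch of how a white-box adversarial example can be generated. It uses the fast gradient sign method (FGSM) of Goodfellow et al., which is standard prior work rather than one of the algorithms proposed in this thesis, and it assumes a PyTorch classifier model, an input image scaled to [0, 1], and its true label; these names are placeholders, not identifiers from the thesis.

    import torch.nn.functional as F

    def fgsm_example(model, image, label, eps=0.03):
        # `model` is a differentiable classifier, `image` a [1, C, H, W] tensor
        # in [0, 1], `label` a tensor holding the true class index; `eps`
        # bounds the per-pixel perturbation in the L-infinity norm.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Move every pixel by eps in the direction that increases the loss,
        # then clip back to the valid image range.
        adversarial = image + eps * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()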
In this thesis, four aspects of image adversarial examples are studied. The main contributions are as follows:

1. An algorithm for generating adversarial examples against defensively distilled networks. Defensive distillation is a strong and effective defense against adversarial attacks. This thesis proposes the ε-neighborhood attack, an algorithm for generating adversarial examples against defensively distilled networks. By carefully designing the objective function, we add an attacker-adjustable parameter that limits the upper bound of the perturbation of each pixel in the image. This not only improves the controllability of the algorithm, but also ensures that the generated adversarial examples contain no bright spots that badly damage the visual quality of the image. Our experiments show that this method also greatly improves the speed of computing adversarial examples.

2. An algorithm for generating adversarial examples through image resizing. Image resizing is a common operation in the deep learning pipeline that adjusts an input image to the size required by the model. This thesis proposes a new method of generating adversarial examples, the embedding attack, which fools a classifier by attacking the image resizing operation instead of exploiting the weaknesses of the neural network itself, as all previous work does. With the embedding attack, we can embed a small target image into a large original image to generate an adversarial example without querying the target network model; when the adversarial example is resized to the specified input size, it fully reverts to the embedded target image (a toy sketch of this idea for nearest-neighbor resizing is given after this list). The embedding attack is designed for three common image resizing methods, and a universal embedding attack applicable to different resizing methods is designed to improve the practicability of the attack. In addition, to improve the visual quality of the generated adversarial examples, image pre-selection and color transfer are added before the embedding attack, forming a complete framework for constructing adversarial examples.

3. A decision-based algorithm for generating adversarial examples. Deep neural networks can be attacked in the black-box setting: by repeatedly querying the model, an attacker can rely only on the final decision label returned by each query, without any probability information. This kind of decision-based attack is the most challenging of the black-box attacks. This thesis proposes a new decision-based attack algorithm, qFool, which can generate adversarial examples with only a small number of queries to the target model. Compared with previous attacks, qFool greatly reduces the number of queries while achieving the same visual quality of adversarial examples. In addition, the algorithm is further improved by constraining the perturbation to low-frequency subspaces, and a commercial image recognition system is successfully attacked, demonstrating the effectiveness of qFool in real-world scenarios.

4. An algorithm for image content protection based on adversarial examples. Online image sharing on social platforms can lead to undesired privacy disclosure; for example, some enterprises may use deep neural networks to detect uploaded images and analyze users' preferences for commercial purposes. To evade such neural-network-based detectors without affecting the visual quality perceived by human eyes, this thesis proposes the stealth algorithm, which generates adversarial examples that prevent automatic detectors from determining the positions of objects in an image, thereby protecting its content. Experiments show that, compared with other image content protection methods, the stealth algorithm achieves a higher success rate and better visual quality. A user-adjustable parameter, the cloak thickness, can be used to adjust the image perturbation and improve the controllability of the algorithm. In addition, the adversarial examples generated by this algorithm are found to be transferable; that is, adversarial examples generated for a specific network model also affect other models.
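The sketch referenced in contribution 2 follows. It is a minimal illustration of the embedding principle under one assumed nearest-neighbor sampling convention, written from scratch rather than taken from the thesis: real resizing implementations (e.g., in OpenCV or Pillow) use different sampling rules, and the thesis's constructions for the three resizing methods, the universal attack, image pre-selection, and color transfer are not reproduced here. The function names and the toy resizer are hypothetical.

    import numpy as np

    def resize_nearest(img, out_h, out_w):
        # Toy nearest-neighbor downscaling: output pixel (i, j) copies the
        # source pixel at (floor(i * H / out_h), floor(j * W / out_w)).
        h, w = img.shape[:2]
        rows = np.arange(out_h) * h // out_h
        cols = np.arange(out_w) * w // out_w
        return img[rows][:, cols]

    def embed_attack(source, target):
        # Overwrite exactly the source pixels that the toy resizer samples
        # with the pixels of `target`, so that resizing the result to the
        # target size reproduces `target` while the rest of `source` is
        # left untouched.
        h, w = source.shape[:2]
        th, tw = target.shape[:2]
        adv = source.copy()
        rows = np.arange(th) * h // th
        cols = np.arange(tw) * w // tw
        adv[np.ix_(rows, cols)] = target
        return adv

    # Usage: embed a 32x32 target into a 256x256 source image, then check
    # that downscaling the adversarial image recovers the target exactly.
    source = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    target = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
    adv = embed_attack(source, target)
    assert np.array_equal(resize_nearest(adv, 32, 32), target)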
Keywords/Search Tags:neural networks, image adversarial examples, adversarial attacks, image content protection