Deep neural networks have been widely used in image classification. However, adding small, imperceptible noise to an image can cause a deep neural network to misclassify it, which indicates that image classification with deep neural networks has a security problem. Studying adversarial attacks and defenses is therefore essential to addressing the security of neural networks. Accordingly, this thesis studies adversarial attack methods and, to improve the robustness of neural network models, the detection-based defense against adversarial examples.

Research on adversarial attacks for image classification. In most scenarios the attacker does not know the specifics of the attacked model, so black-box attacks are closer to real-world applications. Many existing black-box attack methods focus on the number of model queries while ignoring the visual distortion of the adversarial examples. The thesis proposes a black-box attack based on the Laplacian matrix, which improves the perceptual quality of adversarial examples while keeping the number of queries within a reasonable range. Specifically, the thesis constructs a Laplacian matrix from the original image, uses random-search optimization to find an initial perturbation, and finally solves a linear system with the conjugate gradient method to obtain a smooth perturbation. The thesis validates the attack on the ImageNet and CelebA datasets: the method generates adversarial examples with lower distortion and achieves a 100% success rate when attacking the DenseNet121 and VGG16-BN classification models. Furthermore, the thesis theoretically proves the convergence of the method.

Research on defenses against adversarial attacks for image classification. Image classification models based on deep convolutional neural networks are vulnerable to adversarial attacks, and an effective detection algorithm can identify adversarial examples and thereby improve the robustness of the classification model. Based on the difference in decision-boundary behavior between adversarial examples and original images, the thesis proposes a detection algorithm built on a detector with two fully-connected layers, which is trained to recognize adversarial examples. The thesis validates the detection method on the ImageNet and CelebA datasets, where its detection rate is about 10% higher than that of Feature Squeezing (FS).
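The abstract does not give implementation details for the smoothing step of the attack, but the pipeline it describes (random-search perturbation, then a Laplacian-based linear system solved by conjugate gradients) can be sketched. The following is a minimal illustration, not the thesis's actual method: it assumes a 4-neighbour pixel-grid Laplacian L and smooths a raw ±1 perturbation δ by solving (I + λL)x = δ, where λ is a hypothetical smoothing weight.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import cg

def grid_adjacency(h, w):
    """4-neighbour adjacency matrix of an h-by-w pixel grid (assumption:
    the thesis's graph construction is not specified in the abstract)."""
    n = h * w
    A = sparse.lil_matrix((n, n))
    for i in range(h):
        for j in range(w):
            k = i * w + j
            if j + 1 < w:
                A[k, k + 1] = A[k + 1, k] = 1.0  # right neighbour
            if i + 1 < h:
                A[k, k + w] = A[k + w, k] = 1.0  # bottom neighbour
    return A.tocsr()

def smooth_perturbation(delta, lam=5.0):
    """Solve (I + lam * L) x = delta with conjugate gradients, giving a
    spatially smooth version of the raw perturbation. The system matrix
    is symmetric positive definite, so CG is applicable."""
    h, w = delta.shape
    L = laplacian(grid_adjacency(h, w))
    A = sparse.identity(h * w) + lam * L
    x, info = cg(A, delta.ravel())
    assert info == 0  # CG converged
    return x.reshape(h, w)

rng = np.random.default_rng(0)
raw = rng.choice([-1.0, 1.0], size=(16, 16))  # stand-in for a random-search result
smooth = smooth_perturbation(raw)
```

Because the rows of L sum to zero, the solve damps high-frequency sign flips while preserving the perturbation's total mass, which is one plausible reading of how the method trades query efficiency against visual distortion.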
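The detector itself is described only as "two fully-connected layers". A hypothetical PyTorch sketch is below; the input feature dimension, hidden width, and the use of two-class logits are all assumptions, since the abstract does not state how the decision-boundary difference is encoded as a feature vector.

```python
import torch
import torch.nn as nn

class AdvDetector(nn.Module):
    """Two fully-connected layers mapping a feature vector to
    clean-vs-adversarial logits (sizes are hypothetical)."""
    def __init__(self, in_dim=512, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # class 0: clean, class 1: adversarial
        )

    def forward(self, x):
        return self.net(x)

detector = AdvDetector()
feats = torch.randn(4, 512)   # stand-in for boundary-difference features
logits = detector(feats)      # shape (4, 2)
```

Such a detector would typically be trained with a cross-entropy loss on paired clean and adversarial examples, then thresholded on the adversarial-class probability at test time.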