
Saliency Detection Based On Generating Adversarial Disturbances And Analysis Of Its Generalization Ability

Posted on: 2021-09-24
Degree: Master
Type: Thesis
Country: China
Candidate: Z. H. Wu
Full Text: PDF
GTID: 2518306032978819
Subject: Information and Communication Engineering
Abstract/Summary:
Humans in the information society must process all kinds of information at every moment, and most of this information comes from the visual system. As an important topic in computer vision and image processing, image saliency detection refers to using a computer to detect the region of an image or video frame that is most eye-catching to humans. Research from Google Brain shows that any machine learning classifier can be spoofed, including deep learning algorithms. This thesis mainly studies how to generate adversarial samples that effectively attack existing saliency detection models, and how to defend against such attacks. An adversarial sample is produced by adding visually imperceptible perturbations to an image; the perturbed input causes a neural network model to make a wrong prediction.

For saliency detection models based on traditional machine learning, this thesis adds Gaussian noise to the original data set and runs the existing detection models on the noisy images. The experimental results show that the noise does reduce the accuracy of the models; an image preprocessing stage can be added to the model to reduce the impact of the noise.

For saliency detection models based on deep neural networks, this thesis uses a gradient-based attack method to generate adversarial samples and then attacks the existing models with them. The experimental results show that the adversarial samples generated in this thesis successfully attack the saliency detection models, verifying that adversarial samples are effective against existing methods.

In addition, this thesis proposes a framework to defend against such attacks. First, the input image is segmented into superpixels, and the pixels within each superpixel are shuffled randomly; this introduces a different, generic kind of noise that destroys the structural patterns in the adversarial sample and thereby weakens the attack. The image is then JPEG-compressed, pulling the adversarial sample back toward the data subspace. The compressed image is fed to the target network, followed by a context restoration module that adjusts the saliency score of each location according to the similarity between the original pixel value at that location and its context. Experimental results show that the proposed defense framework effectively improves the performance of the saliency detection model under attack.
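The Gaussian-noise experiment on the traditional models can be sketched as follows. This is a minimal NumPy illustration; the noise standard deviation and image size are assumed parameters for demonstration, not values taken from the thesis.

```python
import numpy as np

def add_gaussian_noise(image, sigma=10.0, seed=0):
    """Add zero-mean Gaussian noise to an 8-bit image and clip back to [0, 255]."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Example: perturb a flat grey image; the noisy version would then be fed
# to the saliency detector to measure the drop in accuracy.
img = np.full((64, 64, 3), 128, dtype=np.uint8)
noisy = add_gaussian_noise(img)
```

The same function applied to a whole data set yields the "noisy" evaluation split used to measure how much the model's accuracy degrades.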
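The thesis does not name which gradient-based attack it uses; the fast gradient sign method (FGSM) is a representative example. The sketch below applies the FGSM step to a toy linear scorer whose loss gradient is known in closed form, so it runs without a deep-learning framework; the toy model and the step size epsilon are illustrative assumptions.

```python
import numpy as np

def fgsm_attack(x, grad, epsilon=0.03):
    """FGSM: take one step in the sign of the loss gradient, clip to [0, 1]."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy "model": score = w . x; attacker's loss = -score, so d(loss)/dx = -w.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
x = rng.uniform(0.2, 0.8, size=16)
grad = -w                      # analytic gradient of the loss w.r.t. x
x_adv = fgsm_attack(x, grad)

# The bounded perturbation lowers the score, i.e. the attack succeeds here.
score_clean = float(w @ x)
score_adv = float(w @ x_adv)
```

In the deep-network setting, `grad` would instead be the backpropagated gradient of the saliency model's loss with respect to the input image.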
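The first defense step, shuffling pixels within each segment, can be illustrated as follows. For simplicity this sketch shuffles within a regular grid of blocks rather than a true superpixel segmentation (such as SLIC); the grid, block size, and seed are assumptions for illustration only.

```python
import numpy as np

def shuffle_within_blocks(image, block=8, seed=0):
    """Randomly permute the pixels inside each block x block cell of a color image.

    This destroys fine-grained structural patterns (such as an adversarial
    perturbation) while preserving the set of pixel values in each cell.
    """
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w = image.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            cell = out[y:y + block, x:x + block]
            flat = cell.reshape(-1, image.shape[2])  # rows = pixels (RGB triples)
            rng.shuffle(flat)                        # permute pixels in place
            out[y:y + block, x:x + block] = flat.reshape(cell.shape)
    return out

img = np.random.default_rng(1).integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
shuffled = shuffle_within_blocks(img)
```

The shuffled image would then be JPEG-compressed (e.g. via an image library) before being passed to the target network, as described above.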
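The context-restoration idea, adjusting a location's saliency score by how similar its original pixel value is to its surrounding context, might be sketched like this. The exponential similarity measure and the blending weight `alpha` are illustrative assumptions, not the thesis's exact formulation; the sketch assumes a grayscale image and a saliency map of the same size.

```python
import numpy as np

def restore_with_context(saliency, image, radius=1, alpha=0.5):
    """Blend each saliency score toward its neighbourhood mean, weighted by
    how similar the pixel is to its local context (a pixel that matches its
    context gets pulled more strongly toward the context's saliency)."""
    h, w = saliency.shape
    sal = saliency.astype(np.float64)
    img = image.astype(np.float64)
    out = sal.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # Similarity in (0, 1]: 1 when the pixel equals its context mean.
            sim = np.exp(-abs(img[y, x] - img[y0:y1, x0:x1].mean()) / 255.0)
            ctx = sal[y0:y1, x0:x1].mean()
            out[y, x] = (1 - alpha * sim) * sal[y, x] + alpha * sim * ctx
    return out

rng = np.random.default_rng(0)
sal = rng.uniform(0.0, 1.0, (8, 8))
img = rng.uniform(0.0, 255.0, (8, 8))
restored = restore_with_context(sal, img)
```

Because each output value is a convex combination of scores in the map, the restored map stays within the range of the input saliency scores.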
Keywords/Search Tags: Saliency detection, Adversarial samples, Gradient attack, JPEG compression, Context recovery