
Attack And Defense Of Adversarial Example In Scene Text Detection

Posted on: 2021-04-08
Degree: Master
Type: Thesis
Country: China
Candidate: C M Fang
Full Text: PDF
GTID: 2428330611464981
Subject: Electronic and communication engineering
Abstract/Summary:
As one of the most important crystallizations of human civilization, written language is used to describe the objective world, record abstract thought, and communicate with others. Text appears everywhere in daily life: on road signs in the street, on the packaging of commodities, on web pages on the Internet, and so on. Using computers to recognize the characters in these scenes promotes the digitization of information in production and everyday life. With the development of deep learning in recent years, natural scene text detection and recognition have made great progress and can now recognize a wide variety of text in complex natural scene images. However, researchers have found that image classification models are easily disturbed by adversarial examples, causing them to produce incorrect outputs. At present, little attention has been paid to the security of scene text detection models, so it is necessary to attack such models with adversarial examples to test their robustness and resistance to interference, and to study corresponding defense strategies that improve their security in practical applications. The main work and contributions of this thesis are as follows:

1) Based on the Faster-RCNN detection framework, a natural scene text detection model is trained. With this model, a method based on gradient superposition of interference noise is used to generate multiple groups of adversarial examples with different amounts of noise, and the image quality of the generated adversarial examples is analyzed and compared against the original images (a sketch of such a gradient-based attack appears after this list).

2) To study the robustness of scene text detection models, the generated adversarial examples are used in a white-box attack on the self-trained Faster-RCNN scene text detection model and in black-box attacks on five classic scene text detection algorithms. The experimental results show that most scene text detection models are vulnerable to adversarial interference, which produces erroneous detection boxes and lowers detection accuracy.

3) Against these adversarial attacks, two defense methods are studied. The first adds adversarial examples to the training data so that the model learns the interference noise and produces fewer erroneous detection boxes. The second treats the interference noise in the adversarial example as ordinary image noise and removes it with a filter; the removal performance of median filtering, mean filtering, Gaussian filtering, and bilateral filtering is compared. Sketches of both defenses follow the attack example below.
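The abstract does not give the exact formulation of the "gradient superposition" method in item 1), so the following is a minimal sketch assuming an iterative FGSM-style attack in PyTorch: gradient-sign noise is superimposed on the image over several steps, with the total perturbation bounded to control the "amount of interference noise". The names `model`, `loss_fn`, and `targets` are hypothetical placeholders for the thesis's Faster-RCNN detection loss interface, which is not specified in the abstract.

```python
# Hypothetical sketch of iterative gradient-superposition noise (FGSM-style).
# Assumes a PyTorch model whose outputs feed a scalar detection loss.
import torch

def generate_adversarial(model, image, loss_fn, targets,
                         epsilon=8 / 255, step=2 / 255, iters=10):
    """Superimpose gradient-sign noise over `iters` steps.

    `epsilon` bounds the total perturbation (the "amount of noise");
    `model`, `loss_fn`, `targets` are placeholders, not the thesis's API.
    """
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), targets)        # detection loss vs. ground truth
        grad, = torch.autograd.grad(loss, adv)     # gradient w.r.t. the image
        with torch.no_grad():
            adv = adv + step * grad.sign()                         # ascend the loss
            adv = image + (adv - image).clamp(-epsilon, epsilon)   # bound the noise
            adv = adv.clamp(0.0, 1.0)                              # keep a valid image
        adv = adv.detach()
    return adv
```

Varying `epsilon` would yield the multiple groups of adversarial examples with different noise levels that the thesis compares for image quality.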
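The first defense in item 3) mixes adversarial examples into the training data. A minimal sketch of one such training step, reusing the `generate_adversarial` placeholder above, might look like the following; the clean/adversarial loss weighting is an assumption, not the thesis's recipe.

```python
# Hypothetical adversarial-training step (defense 1): train on both clean
# and adversarial images so the model learns the interference noise.
import torch

def train_step(model, optimizer, loss_fn, images, targets):
    model.eval()  # craft the attack with frozen batch-norm statistics
    adv_images = generate_adversarial(model, images, loss_fn, targets)
    model.train()
    optimizer.zero_grad()
    loss = (loss_fn(model(images), targets)
            + loss_fn(model(adv_images), targets))  # equal weighting assumed
    loss.backward()
    optimizer.step()
    return loss.item()
```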
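The second defense treats the adversarial perturbation as ordinary image noise and filters it out before detection. The four filters named in the abstract map directly onto standard OpenCV calls; the kernel sizes and bilateral parameters below are illustrative defaults, not the thesis's tuned values.

```python
# Sketch of the filtering defense (defense 2): remove adversarial noise
# with classical denoising filters before running the detector.
import cv2

def denoise(img_bgr, method="median"):
    if method == "median":
        return cv2.medianBlur(img_bgr, 3)                # median filtering
    if method == "mean":
        return cv2.blur(img_bgr, (3, 3))                 # mean (box) filtering
    if method == "gaussian":
        return cv2.GaussianBlur(img_bgr, (3, 3), 0)      # Gaussian filtering
    if method == "bilateral":
        return cv2.bilateralFilter(img_bgr, 9, 75, 75)   # edge-preserving
    raise ValueError(f"unknown method: {method}")
```

A practical trade-off, which motivates comparing the four filters, is that stronger smoothing removes more adversarial noise but also blurs text strokes, which can itself hurt detection accuracy.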
Keywords/Search Tags: scene text detection, adversarial example, attack and defense, deep learning