
Effect Of Adversarial Examples Technology On The Security Of CAPTCHAs

Posted on: 2020-05-03
Degree: Master
Type: Thesis
Country: China
Candidate: Y Zhang
Full Text: PDF
GTID: 2428330602951854
Subject: Engineering
Abstract/Summary:
As the first barrier in network security, the importance of the CAPTCHA is self-evident. A CAPTCHA must above all be secure: it should still distinguish legitimate users accurately even under repeated attack. At present, the rapid development of convolutional neural networks has greatly reduced the security of many CAPTCHAs, and some new designs even sacrifice usability to preserve security. Recent research on adversarial examples appears to bring new opportunities to the CAPTCHA field. Studies have shown that adversarial examples can fool state-of-the-art convolutional neural networks by adding noise to the original image that is imperceptible to humans, which matches exactly the combined security and usability requirements of a CAPTCHA. Based on these considerations, this thesis applies adversarial examples to three commonly used CAPTCHA types and studies their effect on CAPTCHA security. The main contents cover the following three aspects:

(1) First, the thesis applies adversarial examples to the selection-based CAPTCHA. It uses two adversarial-example generation algorithms and three generation networks of different scales to produce different samples, and verifies the validity of the adversarial examples experimentally. In the security analysis, it attempts to crack the CAPTCHA in four ways: direct classification with pre-trained weights, a re-trained network, open-source recognition interfaces, and manual user-friendliness testing. It then examines the influence of the generation network, the proportion of adversarial examples, the combination of targeted and untargeted adversarial defenses, mixed data sets, and image-denoising techniques. The results show that adversarial examples improve the security of the selection-based CAPTCHA without imposing any extra burden on the user, but the improvement shrinks in the face of different attacks; against current image-denoising techniques and adversarial training in particular, the effect of adversarial examples is greatly reduced. The discussion experiments further show that adversarial examples produced by a complex generation network are usually smoother, more resistant to adversarial training, and harder to distinguish at the decision boundary, whereas a simple generation network is more sensitive to the noise multiplier. As for the generation algorithm, the single-image algorithm is more targeted than the batch algorithm and more resistant to adversarial training. The thesis also offers some practical tips for applying adversarial examples.

(2) Second, the thesis applies adversarial examples to the click-based CAPTCHA. It designs a sample generator that automatically produces labeled CAPTCHA samples, constructs a simple convolutional neural network for image classification, cracks the CAPTCHA with a network re-trained on adversarial examples, and analyzes security by combining image denoising, adversarial training, and user-friendliness tests. The results show that adversarial examples improve the security of this CAPTCHA only within a small range, and the improvement is unstable: it cannot effectively resist different kinds of attacks. Drawing on the latest literature on adversarial examples for object-detection networks, the thesis also proposes three further ideas for applying adversarial examples to the click-based CAPTCHA, for the reference of CAPTCHA designers.

(3) Finally, the thesis applies adversarial examples to the text-based CAPTCHA, analyzes security and usability from three aspects, and discusses some alternative cracking scenarios. The results show that directly adding adversarial examples to a text-based CAPTCHA provides poor security and cannot effectively resist multiple attacks.

Overall, this thesis is the first to analyze the application of adversarial examples to these three CAPTCHA types from many aspects. The results show that adversarial examples do not reduce the usability of a CAPTCHA, but neither do they improve its security stably and significantly. The thesis offers guidance for future research on adversarial examples in the CAPTCHA field and is a useful reference for CAPTCHA designers.
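The abstract does not name the two generation algorithms the thesis used. As a minimal, self-contained sketch of the underlying idea, the Fast Gradient Sign Method (FGSM), one common generation algorithm, perturbs an image by a small bounded step in the direction of the sign of the input gradient. The linear "classifier" below is a hypothetical stand-in whose input gradient is known in closed form:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """Fast Gradient Sign Method: step eps in the sign of the input gradient."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)   # keep pixel values in the valid [0, 1] range

# Toy stand-in for a classifier: a linear score w . x, whose gradient
# with respect to the input x is simply w.
rng = np.random.default_rng(0)
w = rng.standard_normal(784)   # weights for a flattened 28x28 image
x = rng.random(784)            # a "clean" image, pixels in [0, 1]

grad = w                       # gradient of the score with respect to x
x_adv = fgsm_perturb(x, grad)

# The per-pixel perturbation is bounded by eps, yet every component of it
# pushes the linear score upward.
print(float(np.max(np.abs(x_adv - x))))
```

In the real attack the gradient comes from backpropagation through the target convolutional network rather than a fixed weight vector, and eps controls the trade-off between imperceptibility to humans and attack strength against the network.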
Keywords/Search Tags: adversarial examples, CAPTCHA, security, classification