In recent years, with the development of artificial intelligence, deep learning has been widely applied in the field of computer security, and security applications based on deep learning have become a research hotspot. At the same time, the adversarial example attack is a type of attack that threatens the security of deep learning models, and explaining why adversarial examples can successfully attack these models can aid the adversarial defense of deep learning. For these reasons, this thesis explores both a security application of deep learning and the interpretability of a security threat to it. First, it implements a deep-learning-based stream cipher generator that exploits image-to-image processing: a domain transfer network converts any image from a given distribution into another image with a predefined style, and this transformation process serves as the stream cipher generation process. Second, adversarial example attacks are a security threat to deep learning models, and explaining this black-box process is an important step toward improving their security. Accordingly, the main contributions of this thesis are as follows:

(1) A novel deep-learning-based security application, a stream cipher generator, is designed; it implements the stream cipher generation process using deep learning for image-to-image conversion. The stream cipher generated by this method has a large key space and high randomness, and is sensitive to the initial value. The proposed method is among the first to realize a stream cipher generator through learning. Furthermore, instead of manually designing and implementing key generators, this work proposes a new research direction: the automatic realization of stream cipher generators with higher security levels by learning.

(2) Addressing the explainability of adversarial example attacks, a security threat to deep learning, this thesis explains why adversarial examples can fool image classification models. The study explains adversarial example attacks from both qualitative and quantitative perspectives through a set of carefully designed experiments. In addition, adversarial examples are added to the training set and the classification model is trained on them together with clean examples (i.e., adversarial training), and it is observed whether the model can then correctly classify the adversarial examples. A binary classifier is also trained directly to distinguish clean examples from adversarial examples, and its ability to correctly classify adversarial examples is likewise observed. Understanding what a classification network learns from adversarial examples is an important part of explaining and resisting adversarial example attacks. The strategies of this study can be extended to explain neural networks in other application domains, such as medical image classification or data classification, and can also provide new insights for resisting adversarial example attacks.
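The stream-cipher usage described in contribution (1) can be sketched minimally. The thesis does not specify the network architecture, so the trained domain transfer network is replaced here by a clearly labeled stand-in (a SHA-256 hash chain) that merely mimics its deterministic, seed-sensitive mapping from an input image to keystream bytes; the keystream is then XORed with the plaintext in the usual stream-cipher fashion.

```python
import hashlib

def transfer_net(seed_image: bytes, n: int) -> bytes:
    """Stand-in for the trained image-to-image domain transfer network.

    The real generator maps an input image to a fixed-style output image
    whose pixel bytes serve as the keystream; here a SHA-256 hash chain
    only imitates that deterministic, initial-value-sensitive mapping.
    """
    out, state = b"", seed_image
    while len(out) < n:
        state = hashlib.sha256(state).digest()
        out += state
    return out[:n]

def stream_xor(data: bytes, seed_image: bytes) -> bytes:
    """Encrypt or decrypt by XORing the data with the generated keystream."""
    ks = transfer_net(seed_image, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

XORing twice with the same seed image recovers the plaintext, while changing even one byte of the seed changes the entire keystream, mirroring the initial-value sensitivity claimed above.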
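The adversarial attack examined in contribution (2) can be illustrated on a toy linear classifier. The abstract does not name the attack method, so an FGSM-style perturbation is assumed here purely for illustration: the input is nudged along the sign of the loss gradient until the prediction flips, and adversarial training then simply appends such perturbed inputs, with their true labels, to the training set.

```python
def predict(w, b, x):
    """Toy linear classifier: class 1 if w.x + b > 0, else class 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def fgsm(w, x, y_true, eps):
    """FGSM-style perturbation for a linear model.

    For a linear score w.x + b the loss gradient w.r.t. x points along w,
    so shifting each coordinate by eps against the true class pushes the
    score across the decision boundary.
    """
    sign = lambda v: (v > 0) - (v < 0)
    direction = -1 if y_true == 1 else 1  # lower the score for class 1, raise it for class 0
    return [xi + direction * eps * sign(wi) for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 1.0], 1            # clean example, correctly classified as 1
x_adv = fgsm(w, x, y, eps=0.5)  # small per-coordinate perturbation flips the prediction

# Adversarial training: extend the training set with (x_adv, y) so the
# model is refit on clean and adversarial examples together.
train_set = [(x, y), (x_adv, y)]
```

The same perturbed inputs, labeled 0/1 for adversarial/clean, also form the dataset for the binary clean-vs-adversarial detector mentioned above.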