
Research On Adversarial Examples Based Machine Learning Security Issues

Posted on: 2021-05-08  Degree: Master  Type: Thesis
Country: China  Candidate: C X Yuan  Full Text: PDF
GTID: 2518306479465024  Subject: Master of Engineering
Abstract/Summary:
Recently, machine learning technology has made significant breakthroughs and has been widely adopted in many fields, showing excellent performance. However, recent research demonstrates that machine learning models are vulnerable to adversarial examples: inputs carefully crafted by adversaries to cause the models to make mistakes. Adversarial examples pose a serious threat to the security of machine learning systems. In this thesis, we study adversarial examples in machine learning systems. The work and major contributions of this thesis are as follows.

1) We propose a robust and natural physical adversarial example generation method for object detectors. The method improves the physical robustness of adversarial examples by simulating different physical conditions, and improves their naturalness by constraining the added perturbations. Physical experiments show that the generated adversarial examples achieve a high attack success rate. Compared with other works, the adversarial examples generated by this method are more similar to the original images and more natural.

2) We propose a text adversarial example generation method for intelligent question-and-answer (Q&A) robots, which provides a fast and automatic way to generate test datasets for the robustness evaluation of Q&A robots. The method identifies the most important part of a question according to its dependency relations, and slightly modifies that part to generate adversarial examples. The difference between the original question and the adversarial examples is small, and humans have no trouble understanding the generated adversarial examples. To the best of our knowledge, this is the first adversarial example generation method for Q&A robots.

3) We propose an adversarial-example-based privacy-preserving method against membership inference attacks, in which an attacker infers whether a specific data record is in the training data of the target model. The method converts the prediction of the target model into an adversarial prediction, which misleads attackers and prevents membership inference attacks. Adversarial examples have traditionally served as an attack on machine learning systems, but the proposed method uses them to protect the privacy of a model's training data. The method provides effective privacy protection for the training data without affecting model accuracy.
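The "well-designed inputs" described above can be illustrated with the classic fast gradient sign method (FGSM) applied to a toy logistic-regression model. This is a generic illustrative sketch, not any of the thesis's own methods; the weights `w`, bias `b`, input `x`, and budget `eps` are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "model" with hand-picked weights (illustrative only).
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

# A clean input confidently classified as class 1.
x = np.array([1.0, 0.2, 0.3])
y = 1.0  # true label

# FGSM: step in the sign of the loss gradient w.r.t. the input.
# For logistic regression with cross-entropy loss, d(loss)/dx = (p - y) * w.
p = predict(x)
grad_x = (p - y) * w
eps = 1.0  # perturbation budget (L-infinity)
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # high probability for class 1 on the clean input
print(predict(x_adv))  # the small perturbation flips the prediction
```

The same principle scales to deep models, where the gradient comes from backpropagation rather than a closed form; the thesis's physical and text attacks additionally constrain the perturbation to stay natural in the image or language domain.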
Keywords/Search Tags:Artificial intelligence security, physical adversarial examples, text adversarial examples, privacy-preserving, membership inference attacks