
Research On Privacy Attack Techniques For Neural Network Architecture Search

Posted on: 2024-09-19  Degree: Master  Type: Thesis
Country: China  Candidate: Y Li  Full Text: PDF
GTID: 2568307067972779  Subject: Computer technology
Abstract/Summary:
In recent years, privacy leakage has become one of the most serious security issues in machine learning, attracting widespread attention from both academia and industry. To explore the risk of privacy leakage in machine learning and to build effective privacy protection methods, researchers have studied this issue in depth. The results indicate that machine learning exposes both data and models to various privacy risks, such as dataset reconstruction attacks, membership inference attacks, and model stealing attacks. The existence of these attacks has made people worry about whether their private data are being leaked and has cast serious doubt on the deployment of machine learning, so in-depth research on privacy attacks is of great significance and value. Existing research on privacy attacks focuses on the model weight level rather than the model architecture level, and whether attacks at the architecture level can also cause privacy risks requires further investigation. This thesis therefore takes neural network architecture search as its setting and studies privacy risks at the model architecture level. It first surveys and summarizes neural network architecture search algorithms and machine learning privacy attack techniques, in particular dataset reconstruction attacks and membership inference attacks, and then studies how to mount these two attacks at the architecture level. The contributions of this thesis are as follows:

(1) A novel architecture-level dataset reconstruction attack. In the gradient-exchange scenario of federated learning, the attacker attempts to reconstruct the original training data while observing only the architecture gradient information. Specifically, the method minimizes the distance between the gradients induced by randomly generated fake data and the real architecture gradients; by iteratively optimizing this gradient distance, the real data information is recovered, and noise reduction techniques are then applied to maximize the quality of the reconstructed data (a minimal gradient-matching sketch is given below).

(2) An effective architecture-level membership inference attack. Building on existing neural network architecture search and training methods, and set in the context of online machine learning service platforms, the attack lets an adversary infer architecture membership information from the confidence scores output by the target model. Specifically, the method first applies a differential distance algorithm to separate member data from non-member data, and then applies the differential distance algorithm again to the resulting member partition to distinguish architecture-training data from weight-training data (a second sketch below illustrates this step). Experiments show that the attack reaches an accuracy of 68% when membership inference is performed on MNIST, CIFAR-10, and CIFAR-100, demonstrating that the proposed scheme can infer architecture membership information with high probability when only the confidence scores of the target model are available.
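
The abstract does not include implementation details for the gradient-matching step in contribution (1). The sketch below is a minimal, illustrative reconstruction of the idea: a dummy input and soft label are optimized so that the gradient they induce on the architecture parameters matches the observed real architecture gradient. The toy SuperNet, the use of L-BFGS, and all hyperparameters are assumptions made for illustration, not the thesis's actual scheme; the denoising stage mentioned in the abstract is omitted.

```python
# Hypothetical sketch of a gradient-matching reconstruction attack on
# architecture gradients. SuperNet, alpha, and the optimizer settings are
# illustrative assumptions, not the thesis's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperNet(nn.Module):
    """Toy DARTS-style cell: two candidate ops mixed by architecture weights alpha."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.op1 = nn.Conv2d(1, 8, 3, padding=1)
        self.op2 = nn.Conv2d(1, 8, 5, padding=2)
        self.alpha = nn.Parameter(torch.zeros(2))   # architecture parameters
        self.head = nn.Linear(8, num_classes)

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        h = w[0] * self.op1(x) + w[1] * self.op2(x)
        h = F.adaptive_avg_pool2d(F.relu(h), 1).flatten(1)
        return self.head(h)

torch.manual_seed(0)
model = SuperNet()

# --- victim side: real data produces the observed architecture gradient ---
x_real = torch.rand(1, 1, 28, 28)
y_real = torch.tensor([3])
loss_real = F.cross_entropy(model(x_real), y_real)
g_real = torch.autograd.grad(loss_real, model.alpha)[0].detach()

# --- attacker side: optimize dummy data so its arch gradient matches g_real ---
x_fake = torch.rand(1, 1, 28, 28, requires_grad=True)
y_fake = torch.randn(1, 10, requires_grad=True)      # soft label, also optimized
opt = torch.optim.LBFGS([x_fake, y_fake], lr=0.1)

for step in range(50):
    def closure():
        opt.zero_grad()
        probs_fake = torch.softmax(y_fake, dim=-1)
        loss_fake = -(torch.log_softmax(model(x_fake), dim=-1) * probs_fake).sum()
        g_fake = torch.autograd.grad(loss_fake, model.alpha, create_graph=True)[0]
        dist = ((g_fake - g_real) ** 2).sum()        # gradient distance to minimize
        dist.backward()
        return dist
    opt.step(closure)

# x_fake now approximates x_real; the thesis additionally applies a
# noise reduction step at this point, which this sketch omits.
```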
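The "differential distance algorithm" of contribution (2) is not specified in this abstract. As a clearly hypothetical stand-in, the second sketch measures how much the target model's confidence vector changes under a small input perturbation and thresholds that change twice: once to split members from non-members, and once more to split architecture-training data from weight-training data within the members. The metric, the perturbation, and both thresholds are illustrative assumptions; in practice they would have to be calibrated, e.g. on shadow data.

```python
# Hypothetical sketch of the two-stage, confidence-score-based membership
# inference step. "Differential distance" is stood in for by the L2 distance
# between confidence vectors on a query and on a weakly perturbed copy of it.
# The noise level and thresholds below are illustrative assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def differential_distance(model, x, noise_std=0.02):
    """Stand-in metric: distance between confidences of x and a perturbed x."""
    p_clean = F.softmax(model(x), dim=-1)
    p_pert = F.softmax(model(x + noise_std * torch.randn_like(x)), dim=-1)
    return (p_clean - p_pert).norm(dim=-1)           # shape: (batch,)

@torch.no_grad()
def infer_membership(model, x, t_member=0.05, t_arch=0.02):
    """Two-stage partition: member/non-member, then arch-training/weight-training."""
    labels = []
    for d in differential_distance(model, x):
        if d >= t_member:                            # large change -> non-member
            labels.append("non-member")
        elif d >= t_arch:                            # member, but less memorized
            labels.append("member: weight-training data")
        else:                                        # smallest change -> arch data
            labels.append("member: architecture-training data")
    return labels

# Toy usage with a placeholder classifier (illustrative only):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
queries = torch.rand(4, 1, 28, 28)
print(infer_membership(model, queries))
```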
Keywords/Search Tags: Neural Architecture Search, Machine Learning Privacy and Security, Dataset Reconstruction Attack, Membership Inference Attack