
Research on Security for the Decentralized Evolution of Machine Learning Models

Posted on: 2023-03-27    Degree: Master    Type: Thesis
Country: China    Candidate: C J Tang    Full Text: PDF
GTID: 2568306836464104    Subject: Computer Science and Technology
Abstract/Summary:
As the demand for machine learning grows, decentralized systems can help people build and use machine learning models without depending on corporate oligopolies or being subjected to algorithmic discrimination. Adversarial attacks are currently the main threat to machine learning tasks and can be categorized as evasion attacks, poisoning attacks, model extraction attacks, and inference attacks. Although defenses against these attacks have been studied extensively in centralized settings, they are difficult to apply in untrusted, decentralized settings. How to secure this process therefore needs to be explored, and it is the focus of this thesis: the security of the decentralized evolution of machine learning models.

The main defense against evasion attacks is adversarial training. Traditional adversarial training requires gradient information from the model, but in the decentralized evolution of machine learning models the model owner does not want untrusted participants to access the model, and the participants who possess an adversarial training method likewise do not want to disclose it directly. To resolve this contradiction, this thesis presents adversarial training based on procedural noise adversarial examples. The method requires no access to the model: it perturbs the training dataset and delivers the perturbed dataset as the final artifact. Mutually distrustful participants in the decentralized evolution can thus improve the model's defense against evasion attacks.

This thesis further investigates the impact of poisoning attacks on machine learning models. The experimental results in this thesis, combined with previous studies, indicate that there is currently no effective defense against poisoning attacks, so the provenance of a model must be traceable after an attack has occurred. To address this problem, the thesis designs a commitment scheme for linear regression models and a commitment scheme for neural network model files. The linear regression commitment scheme also allows model predictions to be verified without access to the model. A model commitment must be non-repudiable, yet weak-collision attacks already exist against some hash algorithms; this thesis successfully constructs a pair of neural network model files with colliding MD5 digests, showing that committing to machine learning models with a hash that admits such collisions is insecure. Based on this observation, the thesis proposes a model file commitment scheme based on keyed-hash message authentication codes, which is applicable not only to weakly centralized scenarios but also secures the distribution of models embedded in machine learning frameworks even when the underlying hash is vulnerable to collision attacks.
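The abstract does not specify which procedural noise family or parameters the thesis uses, so the following is only a minimal NumPy sketch of the general idea: a model-agnostic noise pattern (here a random-phase sinusoidal stand-in for Perlin/Gabor-style noise) is added, with a bounded budget, to the training images, and the perturbed dataset is what gets handed over. The function names (`procedural_noise`, `perturb_dataset`) and the budget `eps` are illustrative assumptions, not the thesis's actual interface.

```python
import numpy as np

def procedural_noise(height, width, freq=0.06, octaves=4, seed=0):
    """Sum random-phase sinusoids at several octaves to get a smooth,
    model-agnostic noise pattern (a stand-in for Perlin/Gabor noise)."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:height, 0:width]
    noise = np.zeros((height, width))
    for o in range(octaves):
        fx, fy = freq * (2 ** o) * rng.uniform(0.5, 1.5, size=2)
        phase = rng.uniform(0.0, 2.0 * np.pi)
        noise += (0.5 ** o) * np.sin(2.0 * np.pi * (fx * xs + fy * ys) + phase)
    return noise / np.abs(noise).max()  # scale to [-1, 1]

def perturb_dataset(images, eps=8 / 255, seed=0):
    """Add a bounded procedural-noise perturbation to every image.
    `images` has shape (n, h, w, c) with pixel values in [0, 1]."""
    _, h, w, _ = images.shape
    pattern = procedural_noise(h, w, seed=seed)[..., None]  # broadcast over channels
    return np.clip(images + eps * np.sign(pattern), 0.0, 1.0)

# The perturbed copies are merged with the clean data and delivered as the
# artifact; the model owner fine-tunes on this augmented set without ever
# exposing the model to the other participant.
```

The abstract also does not detail the keyed-hash commitment construction, so the sketch below only illustrates the stated rationale with standard-library HMAC-SHA256: the committer publishes a keyed digest of the model file and reveals the key when opening the commitment. Because an attacker does not know the key in advance, a pair of files colliding under the bare hash (such as the MD5 collision mentioned above) does not by itself break the commitment. `commit_model_file` and `verify_commitment` are hypothetical helper names.

```python
import hmac
import hashlib
import secrets

def commit_model_file(model_path):
    """Commit to a model file with a keyed hash (HMAC-SHA256).
    The random key acts as the opening value; only the digest is published."""
    key = secrets.token_bytes(32)
    with open(model_path, "rb") as f:
        digest = hmac.new(key, f.read(), hashlib.sha256).hexdigest()
    return digest, key  # publish digest now, reveal key when opening

def verify_commitment(model_path, digest, key):
    """Recompute the HMAC over the revealed file and compare it with the
    published commitment in constant time."""
    with open(model_path, "rb") as f:
        expected = hmac.new(key, f.read(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, digest)
```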
Keywords/Search Tags:machine learning, adversarial training, neural networks, commitment scheme