
Privacy-preserving Machine Learning Framework Based On Secure Two-party Computation

Posted on: 2022-11-05
Degree: Master
Type: Thesis
Country: China
Candidate: Z P Zhou
Full Text: PDF
GTID: 2518306761959669
Subject: Automation Technology
Abstract/Summary:
Recently, machine learning has been widely applied in scientific research and industrial production, effectively helping enterprises and institutions with decision-making and management and promoting social progress. With the development of big data, large-scale data is collected for machine learning model training and inference, which has significantly improved model performance. However, the massive data required by machine learning models may contain users' private information, which has raised privacy concerns among governments and users. How to protect users' data while applying machine learning has therefore become a current research hotspot, known as Privacy-Preserving Machine Learning (PPML).

Many privacy-preserving machine learning frameworks have been proposed that can complete model training and inference tasks while protecting users' private data. However, our evaluation of these frameworks revealed two problems: accuracy is reduced by computation error, and execution time is prohibitive. Compared with executing the same tasks in plaintext, model accuracy drops significantly, and running the frameworks takes hours or even days. To mitigate these two problems in existing privacy-preserving machine learning frameworks, we design the relevant protocols, adopt techniques from secure two-party computation, and build our own framework, which consists of a secure inference framework and a secure training framework. Our main contributions are as follows:

1. We design and implement a privacy-preserving machine learning framework for secure inference tasks. To address the accuracy problem, we propose a new truncation protocol that avoids the drop in model accuracy caused by truncation errors (the standard local truncation it improves on is sketched after this abstract) and that can also evaluate the ReLU function incidentally. Meanwhile, for complex functions that are expensive to evaluate in secure multi-party computation, we construct more accurate approximation functions to mitigate the impact on model accuracy. To improve computational efficiency, we convert the convolution operation into matrix computation (illustrated below) and optimize the matrix multiplication protocol based on oblivious transfer.

2. We propose a privacy-preserving machine learning framework for secure training tasks. In this framework, to achieve higher accuracy, we construct an approximation function according to the original function. To improve the online phase, we propose associative multiplication triples to accelerate the computation of fully connected layers (the standard Beaver-triple multiplication they build on is sketched below). Meanwhile, we optimize the existing offline-phase protocol and design a new multiplication-triple generation protocol, which effectively reduces communication and computation and accelerates the execution of the offline phase.

3. We vectorize all protocols and accelerate the matrix computations inside them by adding CUDA support. All the protocols cooperate with each other, and our privacy-preserving machine learning framework is built from them. We adopt a layered architecture style to improve the scalability of the framework. Finally, our framework can execute secure inference and secure training tasks over various machine learning models.
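For context on the truncation problem addressed in contribution 1, the following is a minimal sketch (in Python, not from the thesis) of the standard SecureML-style local truncation of fixed-point values over two-party additive shares: it reconstructs correctly up to one unit except with a small probability, and those rare wrap-around cases produce large errors that degrade model accuracy, which is what the thesis's new truncation protocol is designed to avoid. The ring size, precision, and function names are illustrative assumptions.

```python
import secrets

L = 16                 # toy ring Z_{2^16}, small so that failures are visible
MOD = 1 << L
FRAC = 4               # fractional bits of the fixed-point encoding
VAL_BITS = 12          # plaintext values are assumed to fit in 12 bits

def share(x):
    """Additively secret-share x between two parties."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

def local_truncate(s0, s1):
    """Each party truncates its own share locally (SecureML-style):
    the result reconstructs to floor(x / 2^FRAC) +- 1, except when the
    shares wrap around the ring, which happens with probability about
    x / 2^L and then yields a large error."""
    t0 = s0 >> FRAC
    t1 = (MOD - ((MOD - s1) >> FRAC)) % MOD
    return t0, t1

# Measure how often local truncation is off by more than one unit.
trials, errors = 100_000, 0
for _ in range(trials):
    x = secrets.randbelow(1 << VAL_BITS)
    t = reconstruct(*local_truncate(*share(x)))
    if abs(t - (x >> FRAC)) > 1:
        errors += 1
print(f"large truncation errors: {errors}/{trials}")  # a few percent here
```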
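The conversion of convolution into matrix computation mentioned in contribution 1 is commonly done with an im2col-style unfolding; below is a minimal NumPy sketch of that idea (the function names, shapes, and the absence of padding are assumptions for illustration, not the thesis's actual code). Once a convolution is a single matrix product, a secure matrix-multiplication protocol, for example one based on oblivious transfer, can evaluate it directly.

```python
import numpy as np

def im2col(x, kh, kw, stride=1):
    """Unfold a (C, H, W) input into a matrix whose columns are the
    receptive fields of a kh x kw convolution."""
    c, h, w = x.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    cols = np.empty((c * kh * kw, out_h * out_w), dtype=x.dtype)
    idx = 0
    for i in range(0, h - kh + 1, stride):
        for j in range(0, w - kw + 1, stride):
            cols[:, idx] = x[:, i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols

def conv_as_matmul(x, weights, stride=1):
    """Convolution expressed as one matrix product, so a secure
    matrix-multiplication protocol could evaluate it in one shot."""
    f, c, kh, kw = weights.shape            # f filters of size c x kh x kw
    cols = im2col(x, kh, kw, stride)        # (c*kh*kw, out_h*out_w)
    w_mat = weights.reshape(f, -1)          # (f, c*kh*kw)
    out = w_mat @ cols                      # (f, out_h*out_w)
    out_h = (x.shape[1] - kh) // stride + 1
    out_w = (x.shape[2] - kw) // stride + 1
    return out.reshape(f, out_h, out_w)
```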
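Contribution 2 builds on multiplication triples. The abstract does not specify the proposed associative triples or the new triple-generation protocol, so the sketch below shows only the standard Beaver-triple multiplication over two-party additive shares in Z_{2^64} as background; the triple is generated in the clear purely for the toy check (a real offline phase would produce it, for example, via oblivious transfer or homomorphic encryption), and all names are illustrative.

```python
import secrets

MOD = 1 << 64  # additive secret sharing over the ring Z_{2^64}

def share(x):
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

def beaver_multiply(x_sh, y_sh, triple_sh):
    """Multiply secret-shared x and y using a Beaver triple (a, b, c)
    with c = a*b, itself held in shared form by the two parties."""
    (x0, x1), (y0, y1) = x_sh, y_sh
    (a0, b0, c0), (a1, b1, c1) = triple_sh
    # The parties open e = x - a and f = y - b; these reveal nothing
    # about x and y because a and b are uniformly random masks.
    e = reconstruct((x0 - a0) % MOD, (x1 - a1) % MOD)
    f = reconstruct((y0 - b0) % MOD, (y1 - b1) % MOD)
    # x*y = c + e*b + f*a + e*f, computed share-wise
    # (the public e*f term is added by one party only).
    z0 = (c0 + e * b0 + f * a0 + e * f) % MOD
    z1 = (c1 + e * b1 + f * a1) % MOD
    return z0, z1

# Toy check with a triple generated in the clear.
a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % MOD)
triple = ((a_sh[0], b_sh[0], c_sh[0]), (a_sh[1], b_sh[1], c_sh[1]))
z_sh = beaver_multiply(share(7), share(6), triple)
assert reconstruct(*z_sh) == 42
```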
Keywords/Search Tags: secure multi-party computation, Privacy-Preserving Machine Learning, garbled circuits, oblivious transfer, secret sharing