
Improving Computational Efficiency Of Two-Party Privacy-Preserving Neural Networks

Posted on: 2022-12-25
Degree: Master
Type: Thesis
Country: China
Candidate: Z Q Ge
Full Text: PDF
GTID: 2518306758991489
Subject: Automation Technology
Abstract/Summary:
With their ability to provide accurate prediction models, neural networks have achieved major breakthroughs in speech recognition and image recognition and are now widely used in medicine, finance, and other fields, bringing great convenience to daily life. However, high model accuracy requires aggregating large amounts of data that a single party usually cannot provide; pooling data from multiple parties therefore raises public concerns about data privacy. Privacy-preserving neural networks based on secure multi-party computation are one current approach to joint model training and prediction that keeps each party's data private, but existing solutions still suffer from low computational efficiency and loss of precision.

In this thesis, we propose basic computational protocol building blocks for privacy-preserving training and inference of neural networks based on 2PC (two-party computation), in which private data is secret-shared between two non-colluding servers (illustrative sketches of these building blocks are given after the abstract):

1) We construct the preprocessing protocol required to generate masks in the two-party setting, using Oblivious Transfer and Garbled Circuits.

2) Building on the preprocessing protocol, we implement a secret-sharing comparison protocol between the two computing parties and propose a new method that further reduces the number of communication rounds.

3) Building on the comparison protocol, we construct building blocks such as division and exponentiation, so that neural network training and inference run entirely on arithmetic secret sharing, with no conversions between different types of secrets.

4) To evaluate the performance of the proposed building blocks, we conduct experiments on the real-world MNIST dataset.

Compared with previous work, our framework achieves higher accuracy than other frameworks, very close to that of plaintext training, while also improving time efficiency. For secure training, in the online phase we are 5 times faster than SecureML and 4.32-5.75 times faster than SecureNN, approaching the current best 3PC implementation, FALCON. For secure inference, to the best of our knowledge ours is the fastest 2PC implementation to date, 4-358 times faster than other work.

In summary, the new secret-sharing building blocks under 2PC proposed in this thesis realize a secret-sharing comparison protocol through the preprocessing protocol, construct complex operations such as division and exponentiation on top of the comparison protocol, and thereby support a privacy-preserving neural network training and prediction process based entirely on secret sharing, significantly improving computational efficiency while achieving accuracy close to plaintext training. Future work will continue to improve the computational efficiency of privacy-preserving joint modeling under 2PC to support practical large-scale training tasks, and will build more machine learning algorithms, such as decision trees, on the existing building blocks to support richer joint modeling requirements.
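To make the arithmetic-secret-sharing setting concrete, the following is a minimal sketch of two-party additive sharing over the ring Z_{2^64}. The ring size and function names are illustrative assumptions, not the thesis's actual implementation.

import secrets

RING = 1 << 64  # ring modulus Z_{2^64}; an assumed parameter, common in 2PC work

def share(x: int) -> tuple[int, int]:
    """Split secret x into two additive shares, one per non-colluding server."""
    r = secrets.randbelow(RING)      # uniformly random share for server 0
    return r, (x - r) % RING         # the two shares sum to x mod 2^64

def reconstruct(s0: int, s1: int) -> int:
    """Recombine both servers' shares into the plaintext value."""
    return (s0 + s1) % RING

# Addition is local: each server adds its own shares, with no communication.
x0, x1 = share(42)
y0, y1 = share(17)
assert reconstruct((x0 + y0) % RING, (x1 + y1) % RING) == 59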
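Item 1) describes a preprocessing protocol that generates masks with Oblivious Transfer and Garbled Circuits; those details are not given in the abstract. As a stand-in for that offline/online structure, the classic Beaver-triple multiplication below shows how correlated randomness produced ahead of time lets the online phase finish in a single round of openings. The trusted dealer here is purely a simulation device; in the thesis the correlated randomness would come from the OT/GC preprocessing. It reuses share, reconstruct, and RING from the sketch above.

def beaver_triple():
    """Offline phase (simulated by a dealer): shares of a, b, and c = a*b."""
    a, b = secrets.randbelow(RING), secrets.randbelow(RING)
    return share(a), share(b), share((a * b) % RING)

def mul(x_sh, y_sh):
    """Online phase: multiply shared x and y with one round of openings."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    x0, x1 = x_sh
    y0, y1 = y_sh
    d = reconstruct((x0 - a0) % RING, (x1 - a1) % RING)  # open d = x - a
    e = reconstruct((y0 - b0) % RING, (y1 - b1) % RING)  # open e = y - b
    z0 = (c0 + d * b0 + e * a0 + d * e) % RING  # server 0 adds the public d*e
    z1 = (c1 + d * b1 + e * a1) % RING
    return z0, z1  # z0 + z1 = x*y mod 2^64

assert reconstruct(*mul(share(6), share(7))) == 42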
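Item 3) builds division on top of the comparison protocol. One standard MPC-friendly route, which may or may not match the thesis's construction, is the Newton-Raphson reciprocal: a comparison-based normalization first scales the divisor into [0.5, 1), after which every iteration uses only additions and multiplications and therefore maps directly onto arithmetic shares. Shown in plaintext for clarity; the iteration count is an assumed accuracy parameter.

def reciprocal(d: float, iters: int = 5) -> float:
    """Approximate 1/d for d in [0.5, 1) using only add/mul operations."""
    x = 48 / 17 - (32 / 17) * d      # standard linear initial approximation
    for _ in range(iters):
        x = x * (2 - d * x)          # Newton step: quadratic convergence
    return x

# Division a/b then becomes a * reciprocal(b) after normalizing b.
assert abs(reciprocal(0.8) - 1.25) < 1e-9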
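Item 3) also names exponentiation as a building block. A common multiplication-only approximation, used here purely as an illustration and not necessarily the thesis's method, is the limit form exp(x) ≈ (1 + x/2^n)^(2^n), which costs n squarings and therefore only share-friendly multiplications; n trades rounds for accuracy.

import math

def approx_exp(x: float, n: int = 10) -> float:
    """Approximate exp(x) via (1 + x/2^n)^(2^n) using n squarings."""
    y = 1.0 + x / (1 << n)
    for _ in range(n):
        y = y * y                    # repeated squaring, multiplication-only
    return y

assert abs(approx_exp(1.0) - math.e) < 2e-3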
Keywords/Search Tags: secret sharing, two-party computation, neural network, privacy-preserving training, data security