
Research On Privacy-Preserving In Distributed Collaborative Neural Network

Posted on: 2020-04-02
Degree: Master
Type: Thesis
Country: China
Candidate: X Jia
Full Text: PDF
GTID: 2428330602452011
Subject: Information security
Abstract/Summary:
With the development of science and technology, the amount of data that people generate and store is constantly increasing, and we must consider how to use these data to create value. Machine learning and neural networks can learn and discover patterns in big data, and have achieved remarkable results in tasks such as image recognition, speech recognition, and text recognition. Through continuous development, the number of layers in neural networks has grown, evolving from shallow networks to deep networks, and a variety of network models have been proposed to solve different problems. As neural networks become more complex, the computational power they require keeps increasing, and it is no longer feasible to train large-scale neural networks on a single machine, however powerful. Parallel neural networks were proposed to solve this problem: with parallelization, large-scale training tasks can be deployed on distributed computer systems, thereby speeding up training. At the same time, training a neural network requires massive data. Large companies can collect such data and train neural networks themselves to develop their business, while smaller companies, which hold only small amounts of data, must collaborate with others; this approach is called the distributed collaborative neural network. This thesis studies how, in a distributed collaborative neural network, to make the model's accuracy increase continuously while simultaneously protecting each participant's dataset.

First, this thesis assumes a "trust domain" scenario, which matches the real world: users within a trust domain can share a portion of their datasets for training, while privacy must not be disclosed beyond the trust domain. Within a trust domain, this thesis proposes a method that uses both data and parameters at the same time, which is more accurate than using parameters alone. Between trust domains, only parameters are exchanged, and this thesis proves that the parameters held by the domain server are obfuscated, so users' data cannot be inferred from them.

In addition, this thesis identifies a situation in which participants have no datasets of their own yet obtain high accuracy in every round of training; we call them "dishonest participants". This thesis analyzes how dishonest participants hide themselves during each training round, which lowers the accuracy of the entire training process. To counter dishonest participants, this thesis presents an early multi-round detection mechanism composed of a training method and several detection methods. The training method, called "declined optimal value", exploits the characteristic of neural networks that they improve rapidly in the early stage of training, yielding a lower starting accuracy and a larger accuracy increase. The detection methods prevent dishonest users from uploading incorrect detection results.
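The cross-domain setting, where participants train locally and only parameters leave each participant, can be sketched as a simple parameter-averaging loop with stochastic gradient descent. This is a minimal illustration of the general idea, not the thesis's actual protocol; the linear model, the two participants, and plain averaging as the domain server's aggregation rule are all assumptions made for the sake of a short runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(w, X, y, lr=0.1):
    """One local gradient step on a linear model with squared loss."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def aggregate(weights_list):
    """Domain server combines participants' parameters by averaging."""
    return np.mean(weights_list, axis=0)

# Two hypothetical participants with private data from the same task.
true_w = np.array([1.0, -2.0])
X1, X2 = rng.normal(size=(50, 2)), rng.normal(size=(50, 2))
y1, y2 = X1 @ true_w, X2 @ true_w

w = np.zeros(2)  # shared global parameters
for _ in range(200):
    w1 = local_sgd_step(w, X1, y1)  # participant 1 trains locally
    w2 = local_sgd_step(w, X2, y2)  # participant 2 trains locally
    w = aggregate([w1, w2])         # only parameters reach the server

print(np.round(w, 2))
```

Note that the raw datasets `X1, y1` and `X2, y2` never leave their owners; the server sees only parameter vectors, which is the property the thesis's obfuscation argument builds on.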
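The idea of detecting dishonest participants can likewise be sketched. The abstract does not specify how the "declined optimal value" mechanism works internally, so the sketch below uses a generic stand-in: the server evaluates each uploaded parameter vector on a held-out validation set and flags uploads that fail to improve on the current global model. The function names and this validation-based criterion are hypothetical, not the thesis's method.

```python
import numpy as np

def detect_dishonest(global_w, uploads, X_val, y_val, tol=0.0):
    """Flag participants whose uploaded parameters do not reduce
    validation loss relative to the current global model."""
    def loss(w):
        return float(np.mean((X_val @ w - y_val) ** 2))
    base = loss(global_w)
    return [i for i, w in enumerate(uploads) if loss(w) > base + tol]

rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0])
X_val = rng.normal(size=(100, 2))
y_val = X_val @ true_w

global_w = np.array([0.5, -1.0])
honest = global_w + 0.3 * (true_w - global_w)  # real training: moves toward the optimum
dishonest = np.array([5.0, 5.0])               # fabricated upload, no training behind it

flagged = detect_dishonest(global_w, [honest, dishonest], X_val, y_val)
print(flagged)
```

A participant with no data of its own cannot reliably produce parameters that lower the validation loss, which is why this kind of check catches the fabricated upload while accepting the honest one.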
Keywords/Search Tags:Distributed Neural Network, Collaborative Learning, Privacy-Preserving, Stochastic Gradient Descent