Research on Differential Privacy Protection Methods for Feedforward-Designed Convolutional Neural Networks

Posted on: 2022-08-09
Degree: Master
Type: Thesis
Country: China
Candidate: D Li
Full Text: PDF
GTID: 2518306485985899
Subject: Computer Science and Technology
Abstract/Summary:
In recent years, convolutional neural networks (CNNs), relying on large real-world datasets and efficient optimization algorithms, have been shown to achieve state-of-the-art performance in many fields. However, the development of CNNs has been hampered by the high complexity of their training and by their unexplained black-box nature. Many scholars at home and abroad study the interpretability of CNNs from different directions, and the feedforward-designed convolutional neural network (FF-CNN) is a distinctive result among them. FF-CNN determines its network parameters from statistical principles, so its training needs neither the traditional back-propagation (BP) algorithm nor an SGD optimizer. Under the same network structure, FF-CNN enjoys lower training complexity and better model interpretability than a BP-trained CNN, and it has been widely applied to image classification, point cloud classification, face recognition, medical diagnosis, and other directions.

However, when a CNN's training dataset contains personal private information and the parameters of the model are shared with other users, the privacy of the data provider is easily leaked. The privacy protection of convolutional neural networks has therefore become a research hotspot in recent years. Differential privacy is a privacy protection model backed by rigorous mathematical theory and has been widely used in machine learning and deep learning. Existing schemes for embedding differential privacy into deep learning are mainly of two types: (1) designing a noise-adding scheme on the gradient values produced during network training; (2) adding noise to an expansion of the network's objective function. Neither scheme applies to FF-CNN, because FF-CNN training generates no gradient values and uses no loss function to optimize the network parameters. Therefore, in order to protect the privacy of FF-CNN and promote its application in more practical scenarios, this thesis carries out research on the following three aspects:

(1) We analyze the privacy problems in the FF-CNN training process, which builds the network from multi-stage Saab transforms, multi-stage channel-wise Saab transform convolutions, and multi-stage least-squares regressors (LSRs) for the fully connected layers, and we carry out privacy attack experiments on three datasets, with PSNR as the image quality metric for measuring privacy leakage. The experiments show that when an FF-CNN model is shared with other users, it can reveal private information of the data provider.

(2) We propose the differentially private feedforward-designed convolutional neural network algorithm (SFF-CNN) to solve the privacy leakage in the multi-stage Saab transform. The algorithm protects the Saab transform through a privacy budget scheme proportional to the eigenvalues: a convolution kernel with a large eigenvalue is allocated a large privacy budget, while a kernel with a small eigenvalue is allocated a small one. In this way privacy is guaranteed while the utility of the model is not greatly affected. In addition, to reduce the risk of model overfitting caused by the added noise and to improve robustness, Jensen-Shannon divergence and label smoothing are used to filter the input features of the fully connected layer in the feature-decision stage of the model. Finally, by the post-processing property of differential privacy, SFF-CNN is proved to satisfy the definition of differential privacy, and experimental results show that the proposed algorithm achieves good privacy, utility, and robustness.
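The abstract gives no pseudocode for the budget scheme. The sketch below illustrates the stated idea only — splitting a total budget across the Saab convolution kernels in proportion to their eigenvalues, so that large-eigenvalue kernels receive more budget and hence less noise. The use of the Laplace mechanism and the unit sensitivity are assumptions for illustration, not details taken from the thesis.

```python
import numpy as np

def allocate_budgets(eigenvalues, eps_total):
    """Split a total privacy budget across Saab kernels in proportion
    to their eigenvalues (larger eigenvalue -> larger share -> less noise)."""
    ev = np.asarray(eigenvalues, dtype=float)
    return eps_total * ev / ev.sum()

def perturb_kernels(kernels, eigenvalues, eps_total, sensitivity=1.0):
    """Add Laplace noise to each kernel with scale sensitivity / eps_i
    (the standard Laplace mechanism; sensitivity=1.0 is an assumption)."""
    budgets = allocate_budgets(eigenvalues, eps_total)
    return [k + np.random.laplace(0.0, sensitivity / eps_i, size=k.shape)
            for k, eps_i in zip(kernels, budgets)]

# Example: three hypothetical 5x5 kernels with rapidly decaying eigenvalues.
rng = np.random.default_rng(0)
kernels = [rng.standard_normal((5, 5)) for _ in range(3)]
eigenvalues = [4.0, 1.0, 0.25]
noisy_kernels = perturb_kernels(kernels, eigenvalues, eps_total=1.0)
```

Because the eigenvalues measure how much signal energy each kernel captures, this allocation concentrates the noise on the low-energy kernels, which is consistent with the abstract's claim that utility is not greatly affected.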
(3) We propose an interpretable convolutional neural network algorithm with privacy protection (SFLW-CNN). In view of the privacy problem of the channel-wise Saab transform, and of the fact that the multi-stage channel-wise Saab transform discards the high-frequency-response outputs (the outputs with small eigenvalues), a noise-truncation privacy protection scheme is designed to protect the privacy of the data provider. At the same time, the fully connected layer is trained with the BP algorithm, and the layer-wise relevance propagation (LRP) algorithm is used to compute the relevance between the input features of the fully connected layer and the model output; according to the computed results, the input features are decomposed and selected, so as to reduce the number of model parameters and improve performance. In addition, visualization techniques are applied to show the relevance between the input features of the fully connected layer and the model output, which strengthens the interpretability of the feature-decision process. Finally, extensive experiments on five datasets demonstrate that the proposed SFLW-CNN algorithm achieves a balance among utility, privacy, and interpretability.
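The abstract does not say which LRP rule the thesis uses. The following is a minimal sketch of one LRP-epsilon step through a single fully connected layer, with all names and dimensions hypothetical, showing how per-feature relevance scores could drive the feature selection described above.

```python
import numpy as np

def lrp_linear(a, W, b, R_out, eps=1e-6):
    """One LRP-epsilon step through a fully connected layer:
    redistribute the output relevance R_out onto the input activations a
    in proportion to their contributions a_j * W[j, k]."""
    z = a @ W + b                        # forward pre-activations, shape (K,)
    s = R_out / (z + eps * np.sign(z))   # stabilized relevance ratio per output
    return a * (W @ s)                   # relevance per input feature, shape (J,)

# Example: rank the input features of a hypothetical 8-unit FC layer.
rng = np.random.default_rng(1)
a = rng.random(8)                  # input features of the FC layer
W = rng.standard_normal((8, 3))    # weights to 3 output classes
b = np.zeros(3)
R_out = np.eye(3)[0]               # relevance initialized at class 0
R_in = lrp_linear(a, W, b, R_out)
keep = np.argsort(R_in)[::-1][:4]  # keep the 4 most relevant features
```

Ranking features by R_in and discarding the low-relevance ones matches the abstract's stated goal of reducing the number of model parameters while keeping the decision process interpretable.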
Keywords/Search Tags: Feedforward-Designed Convolutional Neural Network, Differential Privacy, Feature Selection, Interpretability, Privacy Protection