In today’s world, two kinds of artificial intelligence technologies, deep learning and federated learning (federated learning is essentially a deep learning approach), are increasingly applied to data analysis. However, when data are analyzed with deep learning and federated learning, privacy problems become increasingly serious. Using differential privacy to protect deep learning and federated learning is a major current research direction, so it is of great significance to study how to better combine differential privacy with them. Existing research on this combination mainly has the following problems. First, the definition of differential privacy and its basic mechanisms lack rigorous and clear proofs and derivations, which makes it difficult for researchers to get started. Second, in existing methods that combine deep learning with differential privacy, the number of training rounds is limited and the privacy budget is allocated unreasonably, which leads to poor model security and availability. Third, the hierarchical federated learning framework, as a new federated learning framework, can effectively reduce the communication cost of traditional federated learning, but it still faces the problem of balancing the privacy of participants against the utility of the model.

To solve these problems, this paper conducts in-depth research, analyzes the noise addition and accuracy formulas of differential privacy, and, combining differential privacy technology, innovatively proposes a deep learning method based on data feature relevance and adaptive differential privacy, as well as a hierarchical federated learning method based on privacy metrics and adaptive differential privacy. The main work of this paper is summarized as follows; among the items below, (2), (3), and (4) are the main exploratory and innovative contributions.

(1) The research background, the research status at home and abroad, and the basic theoretical knowledge needed in this paper are described. First, the research background and significance, as well as the research status of differential privacy in deep learning and federated learning, are introduced. Second, the theoretical foundations of differential privacy, federated learning, hierarchical federated learning, and information entropy are summarized. Finally, the layer-wise relevance propagation algorithm is analyzed, and its effect is illustrated with figures.

(2) The Laplace mechanism and its accuracy formula and the exponential mechanism and its accuracy formula are analyzed in depth; the problem of excessive shrinkage in both accuracy formulas is pointed out, and a reason for this excessive shrinkage is proposed.
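For reference, the standard formulations of these two mechanisms and their commonly cited accuracy guarantees from the differential privacy literature are recalled below; the refined analysis developed in this paper is not reproduced here. For a query $f:\mathcal{D}\to\mathbb{R}^k$ with $\ell_1$-sensitivity $\Delta f$, the Laplace mechanism outputs
\[
\mathcal{M}_L(x,f,\varepsilon) = f(x) + (Y_1,\dots,Y_k), \qquad Y_i \sim \mathrm{Lap}\!\left(\tfrac{\Delta f}{\varepsilon}\right),
\]
and for every $\delta \in (0,1]$ it satisfies
\[
\Pr\!\left[\,\lVert f(x) - \mathcal{M}_L(x,f,\varepsilon) \rVert_\infty \ge \ln\!\left(\tfrac{k}{\delta}\right)\cdot \tfrac{\Delta f}{\varepsilon}\,\right] \le \delta .
\]
The exponential mechanism $\mathcal{M}_E(x,u,\mathcal{R})$ with utility function $u$ of sensitivity $\Delta u$ selects an output $r \in \mathcal{R}$ with probability proportional to $\exp\!\left(\tfrac{\varepsilon\, u(x,r)}{2\Delta u}\right)$, and its accuracy satisfies
\[
\Pr\!\left[\, u\!\left(x,\mathcal{M}_E(x,u,\mathcal{R})\right) \le \mathrm{OPT}_u(x) - \tfrac{2\Delta u}{\varepsilon}\left(\ln\lvert\mathcal{R}\rvert + t\right) \right] \le e^{-t}.
\]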
(3) To address the limited number of training rounds and the unreasonable privacy budget allocation of existing methods that combine deep learning with differential privacy, which result in poor model security and availability, a deep learning privacy protection method named RADP is proposed, based on layer-wise relevance propagation, information entropy, and adaptive differential privacy. The method computes the average feature relevance of training samples with the layer-wise relevance propagation algorithm. On this basis, combined with information entropy, noise is added adaptively to the average feature relevance. The noise-protected average relevance is then used to add noise to the features adaptively (a simplified sketch of this adaptive noise allocation is given at the end of this summary). The availability, stability, and security of the model are further improved by a gradient-independent denoising strategy. Finally, experimental results on real datasets show that RADP achieves relatively high availability, stability, and security.

(4) To balance the privacy of participants against the utility of the model in the hierarchical federated learning framework, a hierarchical federated learning privacy protection method named PAdHFL, based on privacy metrics and adaptive differential privacy, is proposed to overcome the shortcomings of existing privacy protection methods in hierarchical federated learning. First, a privacy measurement method is used to divide the data into a part that requires noise and a part that does not. Second, on a locally pre-trained model, the layer-wise relevance propagation algorithm is used to compute the average relevance of each feature of the local to-be-noised data to the model output. Then, an information-entropy-based algorithm computes a privacy measure for the average relevance of each feature, and Laplace noise is added adaptively to the average relevance according to this measure. On this basis, the privacy budget is allocated reasonably according to the noise-protected average relevance of each feature, and Laplace noise is added adaptively to the data features. By combining privacy metrics with adaptive differential privacy, the method accurately balances the usability and security of the model. Finally, experimental results on real datasets show that PAdHFL achieves high availability, stability, and security.

(5) On the basis of the above theoretical research and method design, a federated learning data security system based on differential privacy is designed and implemented. It consists of two modules: a deep learning module based on data feature relevance and adaptive differential privacy, and a hierarchical federated learning module based on privacy metrics and adaptive differential privacy.

In summary, this paper focuses on exploring and innovating the basic mechanisms of differential privacy, the combination of differential privacy with deep learning, and the combination of differential privacy with federated learning, and it provides a new, independently innovative line of thought for combining differential privacy technology with deep learning and federated learning.
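As an illustration of the adaptive noise-allocation idea shared by RADP and PAdHFL described above, the following minimal Python sketch distributes a total privacy budget across features in proportion to normalized relevance scores and then perturbs each feature with Laplace noise scaled to its share of the budget. All function and variable names (entropy_weights, adaptive_laplace_noise) are hypothetical, the direction of the allocation (more relevant features receiving a larger budget) is an illustrative choice, and the sketch is not the exact algorithm of either method.

```python
import numpy as np

def entropy_weights(relevance, eps=1e-12):
    """Normalize per-feature average relevance scores into weights that sum to 1
    (a simple normalization used here purely for illustration)."""
    p = np.abs(relevance) + eps
    return p / p.sum()

def adaptive_laplace_noise(features, relevance, total_epsilon, sensitivity=1.0):
    """Split a total privacy budget across features in proportion to their
    (already noise-protected) average relevance, then add per-feature Laplace noise.

    features:      (n_samples, n_features) array of input features
    relevance:     (n_features,) average relevance scores, e.g. from LRP
    total_epsilon: overall privacy budget to distribute across features
    sensitivity:   assumed per-feature sensitivity (hypothetical value)
    """
    weights = entropy_weights(relevance)
    per_feature_eps = total_epsilon * weights             # larger share -> less noise
    scales = sensitivity / np.maximum(per_feature_eps, 1e-12)
    noise = np.random.laplace(loc=0.0, scale=scales, size=features.shape)
    return features + noise

# Hypothetical usage: 100 samples with 4 features and LRP-style relevance scores.
X = np.random.rand(100, 4)
avg_relevance = np.array([0.4, 0.3, 0.2, 0.1])
X_noisy = adaptive_laplace_noise(X, avg_relevance, total_epsilon=1.0)
```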