The development of computer vision is inseparable from deep learning models and the large volumes of data that train them. Because this data often contains sensitive information, ensuring that such information is not leaked during network training has become an active research topic. How to let data participate in training efficiently, while strengthening privacy protection and still obtaining usable models, remains an urgent problem.

In this paper, we study differentially private training of deep learning models on image data, aiming to satisfy differential privacy throughout the training process while preserving as much model utility as possible, and to extend differentially private training and optimization to the collaborative setting of federated learning. To address data interaction and sharing, we further optimize network training under differential privacy and desensitize the original data to generate the desired amount of synthetic data, keeping data privacy secure and controllable while increasing the diversity of private data that can be shared.

First, we propose a training method for differentially private deep learning models based on particle swarm optimization (PSO). Particle positions are mapped to network parameters in order to track the personal-best and global-best positions. The gradient computed at the global-best particle position is perturbed with noise and fed back into model training. Without changing the structure of the parameters or gradients, the noisy parameters are further optimized by exploiting the network's propagation properties: the objective function and its updates are evaluated at the perturbed position parameters, the empirical risk is minimized, and the damage that noise does to model utility is reduced. The advantages of the method are verified by comparison across multiple training models.

In the federated learning setting, each local client uses its particle position parameters to update the personal-best and global-best positions in its local environment, perturbs its current best parameters, and uploads them to the central server. When a client trains again, it requests the optimized, perturbed, and aggregated parameters broadcast by the server and uses them in the next round of collaborative learning. A minimal sketch of the perturbation step is given below.
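The perturbation step described above can be illustrated with a minimal NumPy sketch. The toy quadratic loss, the PSO hyperparameters, and the clipping bound and noise multiplier are illustrative assumptions rather than values from this work; the sketch only shows the general pattern of clipping the gradient at the global-best particle position and adding calibrated Gaussian noise before the update.

```python
import numpy as np

# Hypothetical toy setup: particle positions stand in for flattened network
# parameters; loss_fn / grad_fn stand in for the empirical risk and its gradient.
rng = np.random.default_rng(0)
dim, n_particles = 10, 8
loss_fn = lambda w: float(np.sum(w ** 2))   # placeholder objective
grad_fn = lambda w: 2.0 * w                 # its gradient

clip_norm = 1.0   # L2 clipping bound C (assumed)
sigma = 1.2       # noise multiplier driving the (epsilon, delta) budget (assumed)
lr = 0.1

positions = rng.normal(size=(n_particles, dim))
velocities = np.zeros_like(positions)
pbest = positions.copy()
pbest_loss = np.array([loss_fn(p) for p in positions])
gbest = pbest[np.argmin(pbest_loss)].copy()

for step in range(50):
    # Standard PSO update: move particles toward personal and global bests.
    r1, r2 = rng.random(positions.shape), rng.random(positions.shape)
    velocities = (0.7 * velocities
                  + 1.5 * r1 * (pbest - positions)
                  + 1.5 * r2 * (gbest - positions))
    positions += velocities

    losses = np.array([loss_fn(p) for p in positions])
    improved = losses < pbest_loss
    pbest[improved] = positions[improved]
    pbest_loss[improved] = losses[improved]
    gbest = pbest[np.argmin(pbest_loss)].copy()

    # Differentially private step: clip the gradient at the global-best
    # position and add Gaussian noise calibrated to the clipping bound.
    g = grad_fn(gbest)
    g = g / max(1.0, np.linalg.norm(g) / clip_norm)
    g_noisy = g + rng.normal(scale=sigma * clip_norm, size=g.shape)
    gbest = gbest - lr * g_noisy   # noisy parameters steer the swarm next round
```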
The value of massive data lies in sharing it. Satisfying differential privacy while training deep learning models on image data and maximizing model utility can prevent attackers from recovering the original data, yet data sharing remains an essential driver of continued progress in artificial intelligence. To satisfy differential privacy during model training and generate image data suitable for data augmentation and sharing, we propose a differentially private vector-valued bilateral alternative algorithm (DPBA), which injects vector-valued Gaussian (VVG) noise carrying data-structure information into Wasserstein Generative Adversarial Networks. The trained model satisfies differential privacy and generates data with built-in privacy protection. The bilateral alternative noise can perturb arbitrary gradients, which guarantees differential privacy for all gradients while also benefiting parameter optimization. The dynamic noise and vector-valued perturbation ensure the strength of the privacy guarantee and yield informative, feature-rich samples after training. We validate on multiple datasets to compare model utility and apply a gradient-based attack to verify the resistance of the DPBA algorithm; a sketch of the noise-injection step follows this abstract. Finally, we conclude that the training models in this paper satisfy differential privacy and produce private image data with higher utility, as well as strong resistance to recovery attacks at the same privacy level.
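The general idea of injecting structured, vector-valued Gaussian noise into critic gradients can be illustrated with the following minimal NumPy sketch. The linear critic, the toy data, the covariance construction, and all hyperparameters are illustrative placeholders and do not reproduce the DPBA or bilateral-alternative construction; the sketch only shows clipped per-sample critic gradients being perturbed with correlated (non-isotropic) Gaussian noise before the parameter update.

```python
import numpy as np

# Placeholder data and a linear critic f(x) = w.x; not the paper's architecture.
rng = np.random.default_rng(1)
dim, batch = 5, 32
real = rng.normal(loc=1.0, size=(batch, dim))    # stand-in real samples
fake = rng.normal(loc=-1.0, size=(batch, dim))   # stand-in generator output
w = np.zeros(dim)                                # critic weights

clip_norm, lr = 1.0, 0.05
# Structured covariance tied to the data's second moments (illustrative only),
# rescaled against the clipping bound.
cov = np.cov(np.vstack([real, fake]).T) + 1e-3 * np.eye(dim)
cov *= (1.2 * clip_norm) ** 2 / np.trace(cov)

for step in range(100):
    # WGAN critic objective: maximize E[f(real)] - E[f(fake)].
    # For a linear critic the per-sample gradients depend only on the data.
    per_sample = np.vstack([real, -fake])
    norms = np.linalg.norm(per_sample, axis=1, keepdims=True)
    per_sample = per_sample / np.maximum(1.0, norms / clip_norm)  # clip
    grad = per_sample.mean(axis=0)

    # Vector-valued Gaussian perturbation: correlated noise drawn from `cov`.
    noise = rng.multivariate_normal(np.zeros(dim), cov)
    w += lr * (grad + noise / per_sample.shape[0])   # ascent step on the critic
```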