In recent years, as data privacy and security have become increasingly important, not only do users care more about their private information, but governments also pay growing attention to data privacy protection. This makes it more difficult for enterprises to collect data for training and intensifies the data silo problem. To escape this dilemma, Google proposed the Federated Learning (FL) framework, a distributed machine learning framework. Unlike other distributed learning approaches, federated learning updates the global model parameters by uploading gradients rather than local data. However, federated learning does not completely eliminate privacy leakage: a growing number of studies show that shared plaintext gradients can be inverted to recover users' local data; a dishonest server may return incorrect global model parameters to all users; and users may collude with the server to send incorrect global model parameters to specific users, preventing those users' local models from converging in time. In addition, because of the nature of federated learning, users' datasets and device configurations are inconsistent, which gives rise to statistical and system heterogeneity, respectively, and ultimately creates serious obstacles to the training accuracy of the global model.

To address the above issues, this paper makes the following contributions. 1) A privacy-preserving and verifiable federated learning (PPVRFL) scheme is proposed. The scheme uses BLS aggregate signatures to verify the integrity of the gradient parameters uploaded by users and the correctness of the gradient parameters aggregated by the server. It also defends against collusion between users and the server, eliminating such threats by checking whether the global parameters that every user receives from the server are identical, while the privacy of local data is protected by CKKS homomorphic encryption. Finally, this paper analyzes the performance and operational efficiency of PPVRFL through experiments. 2) For the heterogeneity problem, this paper proposes an FL optimization algorithm with automatic weight updating (FedAuw), which uses adaptive learning to fully exploit the server's computational resources and to compute the optimal aggregation weight for each user. When every user's weight is optimal, the impact of both statistical and system heterogeneity on the aggregated global model is reduced. Experimental comparisons with several other algorithms show that the global model trained with FedAuw outperforms the alternatives in both convergence speed and accuracy.
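To make the consistency check in contribution 1) concrete, the sketch below illustrates the idea of users comparing what the server actually sent them so that a collusion attack targeting specific users can be detected. It is a minimal illustration only: the abstract does not give the exact protocol, so SHA-256 digests are used here as a stand-in for the paper's BLS-based verification, and the function names are hypothetical.

```python
import hashlib
import json


def model_digest(global_params):
    """Hash the global parameters a user received, so that users can
    exchange and compare compact fingerprints instead of full models."""
    payload = json.dumps([round(p, 8) for p in global_params]).encode()
    return hashlib.sha256(payload).hexdigest()


def detect_inconsistent_broadcast(digests_from_users):
    """Return True if any user received global parameters that differ
    from the others, i.e. the server (possibly colluding with some
    users) sent a different model to a targeted user."""
    return len(set(digests_from_users)) > 1


# Example: three users receive the same model, one receives a tampered copy.
honest = model_digest([0.12, -0.50, 0.33])
tampered = model_digest([0.12, -0.50, 0.30])
print(detect_inconsistent_broadcast([honest, honest, honest, tampered]))  # True
```

In the actual scheme, the same comparison idea would operate on signed, CKKS-encrypted gradients rather than plaintext hashes; the sketch only shows why identical broadcasts across users rule out this class of attack.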
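For contribution 2), the abstract states that the server adaptively computes an aggregation weight per user but does not specify the weighting rule. The following sketch therefore shows only the general shape of server-side adaptive weighting, using a hypothetical rule (weights inversely proportional to each client's validation loss) purely for illustration; it should not be read as the FedAuw algorithm itself.

```python
import numpy as np


def aggregate_with_adaptive_weights(client_updates, client_val_losses):
    """Weighted aggregation of client updates.

    Hypothetical rule: clients whose updates yield a lower validation
    loss on the server receive a larger weight. The real FedAuw weight
    computation is not given in the abstract.
    """
    inv = 1.0 / (np.asarray(client_val_losses) + 1e-8)  # lower loss -> larger weight
    weights = inv / inv.sum()                           # normalize to sum to 1
    stacked = np.stack(client_updates)                  # (num_clients, num_params)
    return weights @ stacked                            # weighted average update


updates = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
val_losses = [0.9, 0.4, 0.6]
print(aggregate_with_adaptive_weights(updates, val_losses))
```

The design intent conveyed by the abstract is that this weight search runs on the server, where spare computational capacity is available, so that heterogeneous clients do not pull the aggregated model toward poorly fitting local updates.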