Federated learning, an effective approach to the data-silo problem, trains a model without collecting raw data; this requires a server to aggregate the clients' local gradients. However, current solutions assume that the server computes the global gradient honestly, whereas in reality a self-interested server may not compute it correctly. To address this problem, a rational and verifiable federated learning framework is proposed to verify the integrity of the global gradient. Moreover, as a machine learning mechanism, federated learning requires many iterations, so model training is slow and inefficient. To address this, a federated learning optimization algorithm based on an incentive mechanism is proposed to improve training efficiency. The main contributions of this paper are as follows:

(1) A rational and verifiable federated learning framework. The inertia and selfishness of the server mean it may not compute the global gradient correctly, and existing schemes based on cryptographic algorithms incur excessive verification overhead. To solve these problems, this paper proposes a rational and verifiable federated learning framework. First, we draw on game theory to design a prisoner contract and a betrayal contract that force the server to behave honestly. Second, the scheme uses replication-based verification to check the integrity of the global gradient, and it supports offline clients. Finally, analysis proves the correctness of the scheme, and experiments show that, compared with existing verification algorithms, the scheme reduces the client's verification overhead to zero, cuts the number of communication rounds per iteration from three to two, and keeps the training cost inversely proportional to the clients' offline rate.

(2) A federated learning optimization algorithm based on an incentive mechanism. To address the many iterations and long training time of the federated learning training process, this paper proposes a federated learning optimization algorithm based on an incentive mechanism. First, we design a reputation value that depends on training time and model loss, and on this basis design an incentive mechanism that encourages clients with high-quality data to join the training. Second, an auction mechanism is designed based on auction theory: clients auction their local training tasks to fog nodes, delegating training to high-performance fog nodes to improve local training efficiency and mitigate uneven performance between clients. Then, a global gradient aggregation strategy is designed to increase the weight of high-accuracy local gradients in the global gradient and to eliminate malicious clients, thereby reducing the number of training rounds. Finally, experiments show that, compared with existing algorithms, the model needs fewer training rounds and the total training time is reduced by more than 27%.
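The replication-based verification in contribution (1) can be illustrated with a minimal sketch. The assumption here is that the check simply compares the aggregates returned by redundant (possibly lazy) servers and rejects on any disagreement; the actual scheme's contract and offline-client handling are not modeled, and all function names are illustrative.

```python
def aggregate(gradients):
    """Honest FedAvg-style aggregation: average the local gradient vectors."""
    n = len(gradients)
    return [sum(g[i] for g in gradients) / n for i in range(len(gradients[0]))]

def replicated_verify(local_gradients, servers, tol=1e-9):
    """Ask each replica server for the global gradient; accept only if
    all replicas agree within a tolerance, otherwise flag cheating."""
    results = [srv(local_gradients) for srv in servers]
    reference = results[0]
    for r in results[1:]:
        if any(abs(a - b) > tol for a, b in zip(reference, r)):
            return None  # disagreement: at least one server misbehaved
    return reference

# An honest server computes the true average; a lazy one returns zeros.
honest = aggregate
lazy = lambda grads: [0.0] * len(grads[0])

grads = [[1.0, 2.0], [3.0, 4.0]]
print(replicated_verify(grads, [honest, honest]))  # [2.0, 3.0]
print(replicated_verify(grads, [honest, lazy]))    # None
```

The client does no cryptographic work in this picture, which is consistent with the abstract's claim of zero client-side verification overhead.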
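The aggregation strategy in contribution (2) can likewise be sketched. The weighting signal, the accuracy threshold used to drop malicious clients, and the reputation formula combining training time and loss are all assumptions made for illustration; the thesis's actual definitions may differ.

```python
def reputation(train_time, loss, alpha=0.5):
    """Hypothetical reputation value: faster training and lower
    model loss both raise the score (higher is better)."""
    return alpha / (1.0 + train_time) + (1.0 - alpha) / (1.0 + loss)

def weighted_aggregate(gradients, accuracies, threshold=0.5):
    """Weight each local gradient by its accuracy and drop clients
    below the threshold, treating them as malicious or low quality."""
    kept = [(g, a) for g, a in zip(gradients, accuracies) if a >= threshold]
    total = sum(a for _, a in kept)
    dim = len(kept[0][0])
    return [sum(a * g[i] for g, a in kept) / total for i in range(dim)]

grads = [[1.0, 1.0], [3.0, 3.0], [100.0, 100.0]]
accs = [0.9, 0.6, 0.1]  # the third client looks malicious
print(weighted_aggregate(grads, accs))  # [1.8, 1.8]
```

Up-weighting high-accuracy gradients and excluding outliers is one way the global model could converge in fewer rounds, matching the reduction in training rounds reported in the experiments.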