With the continuous development of machine learning and deep learning, the importance of data has been increasingly recognized. High-quality datasets play a crucial role in training better models. However, due to security and privacy concerns, most data owners are unwilling to share their data, leading to the problem of data silos. Federated learning aims to train models across decentralized devices while preserving data privacy. Unlike traditional centralized machine learning, federated learning performs training locally on individual devices without transferring the entire dataset to a central server, effectively addressing the data-silo problem. However, improving model accuracy and accelerating convergence remain important challenges in federated learning. This paper therefore proposes two methods to enhance convergence speed and accuracy in federated learning, with the following specific approaches:

1. To tackle the catastrophic forgetting problem that arises in incremental learning during federated training, an innovative algorithm is proposed. By constructing a projector, the input vectors are projected onto directions orthogonal to the previously learned weights, making them perpendicular to the space spanned by all previous gradients. This ensures that the parameter space corresponding to previously learned knowledge is not overwritten, so that knowledge learned in different training rounds is stored in different parameter subspaces, fundamentally eliminating the risk of old knowledge being disturbed by new knowledge.

2. To address the problem of measuring client contributions in federated learning, an innovative evaluation algorithm is proposed. A sample-quality factor and a dataset-dispersion factor are designed to dynamically adjust the weights of the parameters uploaded by different client devices during training. These weights control the influence of each client's uploaded parameters on the global model, ensuring that data from clients with higher contributions are better utilized, thereby accelerating convergence and improving training accuracy.

Experimental results demonstrate that both methods effectively improve convergence speed and accuracy in the federated learning process.
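The orthogonal-projection idea in the first method can be illustrated with a minimal sketch. The abstract does not give the exact construction, so the sketch below assumes previous gradients are stored as columns of a matrix and uses a standard QR-based projector P = I - QQ^T onto their orthogonal complement; the function names are illustrative, not the author's:

```python
import numpy as np

def orthogonal_projector(prev_grads):
    """Projector onto the orthogonal complement of the span of previously
    observed gradient vectors (stored as columns of prev_grads)."""
    # Orthonormal basis Q for the span of previous gradients, via QR.
    q, _ = np.linalg.qr(prev_grads)
    # P = I - Q Q^T removes any component lying in that span.
    return np.eye(prev_grads.shape[0]) - q @ q.T

def project_update(update, prev_grads):
    """Project a new update so it is perpendicular to all previous gradients,
    leaving the subspace that encodes old knowledge untouched."""
    if prev_grads is None or prev_grads.size == 0:
        return update
    return orthogonal_projector(prev_grads) @ update
```

For example, with a single stored gradient along the first axis, the update `[1, 1, 0]` is projected to `[0, 1, 0]`: its component along the old direction is removed, so applying it cannot overwrite what was learned in that direction.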
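The contribution-weighted aggregation in the second method can likewise be sketched. The abstract does not define the sample-quality and dataset-dispersion factors, so the scoring rule below (their product, normalized) is a hypothetical placeholder; only the overall scheme, re-weighting each client's parameters before averaging them into the global model, follows the text:

```python
import numpy as np

def contribution_weights(quality, dispersion):
    """Combine per-client sample-quality and dataset-dispersion factors
    into normalized aggregation weights (illustrative combination rule)."""
    score = np.asarray(quality, dtype=float) * np.asarray(dispersion, dtype=float)
    return score / score.sum()

def weighted_aggregate(client_params, weights):
    """FedAvg-style update: a convex combination of client parameter
    vectors, with higher-contribution clients weighted more heavily."""
    stacked = np.stack(client_params)  # shape: (num_clients, num_params)
    return weights @ stacked
```

With two clients whose factors give scores 1 and 3, the weights become `[0.25, 0.75]`, so the second client's parameters dominate the aggregated global model.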