In commercial applications, personalization is a focus alongside data confidentiality. Mining users' individual habits has great economic value, but traditional federated learning does not address this need; moreover, when new tasks and data arrive, the cost of retraining is high, so a learning strategy that can be trained dynamically on new tasks and data is required. This paper addresses these deficiencies of existing federated learning as follows.

(1) For data personalization, a multi-head attention mechanism is introduced into the client model to capture the multiple correlations among client features, increase the degree of personalization of the client model parameters, and improve client model performance. To handle the effect that reasonable personalization of the local models has on the global model during aggregation, a fusion framework for federated learning with a multi-head attention mechanism is designed on the server. The framework first randomly samples 30% of the data and trains it with federated averaging to obtain a preliminary global model on the server. For the remaining 70% of the data, the distance between the current local model parameters and the pre-trained global model parameters is computed during aggregation; this model difference is used to formulate an individual weight for each local model, and a weighted average then yields the global model. The approach preserves the individual characteristics of each model while the models still learn from one another.

(2) For the class-incremental problem in the data, the iCaRL incremental-learning training strategy is introduced, which lets a client perform preliminary training with only a few classes and then gradually add new classes. The strategy classifies samples with the nearest-mean-of-exemplars rule, selects exemplars by herding-based prioritization, and combines knowledge distillation with prototype rehearsal of the learned representations, so that growing data can be handled dynamically without retraining. To address imbalanced client samples in federated learning, where noise from clients with large sample sizes harms global training, a spatial attention mechanism is designed in the client model to capture the characteristics of each client's overall sample while minimizing the influence of that noise on the final training results. On the server, federated averaging is first applied to the preprocessed data drawn from the different clients to provide initial values for the learning process and obtain an initial global model; the spatial attention mechanism is then incorporated into model aggregation to counter the difficulty of capturing features under imbalance and to strengthen the model's grasp of key feature information during training.

In summary, this paper presents two federated learning frameworks, one for data personalization and one for dynamic data increments. The first framework mines feature correlations while preserving a reasonable degree of personalization in the local models; the second continuously learns new concepts from the data stream. Experimental results demonstrate the effectiveness of both proposed frameworks.
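As a rough illustration of the first contribution, the sketch below shows a client model built around a multi-head self-attention layer over the input features. The class name `AttentiveClientModel`, the layer sizes, and the mean pooling are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class AttentiveClientModel(nn.Module):
    """Hypothetical client model: multi-head self-attention over a sequence of
    feature tokens, followed by a linear classifier, to capture correlations
    between client features."""
    def __init__(self, feat_dim, num_heads, num_classes):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):                      # x: (batch, seq_len, feat_dim)
        attended, _ = self.attn(x, x, x)       # self-attention over the features
        pooled = attended.mean(dim=1)          # pool over feature positions
        return self.head(pooled)

# Example: 16 samples, 8 feature tokens of dimension 32, 5 classes
model = AttentiveClientModel(feat_dim=32, num_heads=4, num_classes=5)
logits = model(torch.randn(16, 8, 32))
```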
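The server-side fusion step described above (compute the distance between each local model and the preliminary global model, turn the differences into individual weights, and take a weighted average) could be sketched as follows. The softmax over negative L2 distances and the `temperature` parameter are assumptions, since the abstract does not give the exact weighting formula.

```python
import numpy as np

def distance_weighted_aggregate(local_params, global_params, temperature=1.0):
    """Aggregate local models into a global model, weighting each client by its
    parameter distance to the preliminary global model trained with federated
    averaging on the 30% sample.

    local_params: list of 1-D numpy arrays (flattened client parameters)
    global_params: 1-D numpy array (preliminary global model parameters)
    """
    # L2 distance between each local model and the preliminary global model
    dists = np.array([np.linalg.norm(p - global_params) for p in local_params])
    # Turn distances into individual weights (closer models get larger weight);
    # this softmax form is an assumption, not the paper's exact formula.
    weights = np.exp(-dists / temperature)
    weights /= weights.sum()
    # Weighted average of the local parameters yields the new global model
    return sum(w * p for w, p in zip(weights, local_params))

# Toy usage with random parameter vectors
rng = np.random.default_rng(0)
global_init = rng.normal(size=10)
clients = [global_init + rng.normal(scale=s, size=10) for s in (0.1, 0.5, 1.0)]
new_global = distance_weighted_aggregate(clients, global_init)
```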
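For the class-incremental part, the iCaRL components named above, herding-based exemplar selection and nearest-mean-of-exemplars classification, can be sketched in feature space as below. The function names are illustrative, feature normalization and the knowledge-distillation loss used during training are omitted for brevity.

```python
import numpy as np

def herding_select(features, m):
    """Herding-based exemplar selection: greedily pick m samples whose running
    mean best approximates the class mean in feature space (iCaRL-style)."""
    class_mean = features.mean(axis=0)
    selected, current_sum = [], np.zeros_like(class_mean)
    for k in range(m):
        # Pick the sample that moves the exemplar mean closest to the class mean
        gaps = np.linalg.norm(
            class_mean[None, :] - (current_sum[None, :] + features) / (k + 1), axis=1)
        gaps[selected] = np.inf          # never pick the same sample twice
        idx = int(np.argmin(gaps))
        selected.append(idx)
        current_sum += features[idx]
    return selected

def nearest_mean_classify(query, exemplar_sets):
    """Nearest-mean-of-exemplars rule: assign the class whose exemplar mean is
    closest to the query feature (exemplar_sets: class -> (k, d) array)."""
    means = {c: feats.mean(axis=0) for c, feats in exemplar_sets.items()}
    return min(means, key=lambda c: np.linalg.norm(query - means[c]))
```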
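Finally, a minimal sketch of a spatial attention block of the kind the second framework could attach to client feature maps is shown below, assuming a CBAM-style design (channel-pooled maps followed by a convolution and a sigmoid gate); the abstract does not specify the exact form used.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Assumed spatial attention block: re-weights each spatial position of a
    feature map so informative regions dominate and per-client noise is damped."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (batch, channels, H, W)
        avg_map = x.mean(dim=1, keepdim=True)   # average over channels
        max_map = x.amax(dim=1, keepdim=True)   # max over channels
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                         # gate the spatial positions
```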