Research On Cloud-edge Joint Task Inference And Model Collaborative Training In Edge Intelligence

Posted on: 2022-04-22
Degree: Master
Type: Thesis
Country: China
Candidate: S H Xu
Full Text: PDF
GTID: 2518306563475624
Subject: Communication and Information System
Abstract/Summary:
With the advent of the Internet of Things era, edge computing is gradually becoming a new computing paradigm in the IoT field, and with the rapid development of AI, edge intelligence will be the trend of the future. Compared with the traditional cloud computing model, the edge computing model, with its scattered and limited resources, poses great challenges to the training and inference, model deployment, and resource allocation of AI services. This thesis focuses on task inference and model training scenarios in edge intelligence and proposes corresponding optimization algorithms.

Firstly, task inference of DNN models in edge intelligence is studied. Since popular DNN models are usually very large, edge nodes can hardly meet their storage and computing resource requirements. We propose a DNN-based multitask cloud-edge joint inference algorithm, DMCJIA, which realizes cloud-edge joint task inference through DNN model splitting and formulates the problem of minimizing average task delay in multitask scenarios. We decompose this NP-hard nonlinear mixed-integer program into two sub-problems, joint model offloading and cloud resource allocation, and design a two-layer optimization algorithm that combines a binary genetic algorithm with an augmented Lagrangian method to approximate the optimal solution. Experimental results show that DMCJIA reduces average task delay cost by about 20%-30% compared with other task inference algorithms in edge intelligence, and that it is robust to changes in the task environment, so it can meet real-time dynamic task inference requirements in edge intelligence.

Secondly, the model training task of federated learning in edge intelligence is studied. Because traditional federated learning relies on a central server, which utilizes node resources poorly and is prone to congestion in the core network, we implement decentralized collaborative model training based on the gossip algorithm and analyze its convergence. We then use simulated annealing to adapt the model push probability in the gossip algorithm and propose AGEFL, an adaptive gossip-based edge federated learning algorithm. Experimental results show that AGEFL converges 20%-60% faster than traditional federated learning without increasing the communication cost, and has better convergence performance than the classic gossip training algorithm.

In summary, this thesis studies the key issues of multitask cloud-edge joint inference and decentralized collaborative model training, improves the execution efficiency of task inference and model training, and achieves good results in the distributed environment of edge intelligence. At the same time, the solution stability of task inference and the handling of data heterogeneity in model training can still be improved, which calls for further research.
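The two-layer decomposition for DMCJIA can be sketched as follows. Everything here is illustrative and not from the thesis: the task profiles and numbers are invented, and the inner cloud-resource sub-problem is solved with a simple closed form (minimizing Σ cᵢ/sᵢ subject to Σ sᵢ = 1 gives sᵢ ∝ √cᵢ) as a stand-in for the augmented Lagrangian method the thesis actually uses; the outer sub-problem uses a plain genetic algorithm over per-task DNN split points.

```python
import math
import random

random.seed(0)

# Illustrative per-task profiles (NOT from the thesis): per-layer compute
# times at the edge and in the cloud (ms, cloud time at unit share), and
# the size (KB) of each cut point's output; out_kb[0] is the raw input.
TASKS = [
    {"edge": [4.0, 3.0, 5.0, 2.0], "cloud": [1.0, 0.8, 1.2, 0.5],
     "out_kb": [100, 80, 40, 20, 5]},
    {"edge": [6.0, 4.0, 3.0, 3.0], "cloud": [1.5, 1.0, 0.8, 0.7],
     "out_kb": [120, 60, 50, 10, 4]},
    {"edge": [2.0, 2.0, 8.0, 6.0], "cloud": [0.5, 0.5, 2.0, 1.5],
     "out_kb": [90, 70, 30, 25, 6]},
]
KBPS = 10.0   # edge-to-cloud uplink rate, KB per ms (illustrative)
LAYERS = 4

def allocate_cloud(splits):
    """Inner sub-problem: divide unit cloud capacity across tasks.
    Minimizing sum(c_i/s_i) s.t. sum(s_i)=1 has the closed form
    s_i = sqrt(c_i)/sum_j sqrt(c_j) (stand-in for the thesis's
    augmented Lagrangian solver)."""
    work = [sum(t["cloud"][s:]) for t, s in zip(TASKS, splits)]
    roots = [math.sqrt(w) for w in work]
    total = sum(roots) or 1.0
    return [r / total for r in roots], work

def avg_delay(splits):
    """Average task delay: edge compute + upload at the cut + cloud compute."""
    shares, work = allocate_cloud(splits)
    delays = []
    for t, s, sh, w in zip(TASKS, splits, shares, work):
        edge = sum(t["edge"][:s])
        upload = t["out_kb"][s] / KBPS
        cloud = w / sh if w else 0.0
        delays.append(edge + upload + cloud)
    return sum(delays) / len(delays)

def genetic_search(pop_size=20, generations=60, p_mut=0.2):
    """Outer sub-problem: genetic search over per-task split points
    (split s = first s layers run at the edge, the rest in the cloud)."""
    pop = [[random.randint(0, LAYERS) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=avg_delay)
        elite = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randint(1, len(TASKS) - 1)
            child = a[:cut] + b[cut:]          # single-point crossover
            if random.random() < p_mut:        # random-reset mutation
                child[random.randrange(len(TASKS))] = random.randint(0, LAYERS)
            children.append(child)
        pop = elite + children
    return min(pop, key=avg_delay)

best = genetic_search()
print("best splits:", best, "avg delay (ms):", round(avg_delay(best), 2))
```

The nested structure mirrors the decomposition in the abstract: every candidate set of split points evaluated by the outer search triggers one inner resource-allocation solve.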
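The gossip-based decentralized training with an adaptive push probability can be sketched in miniature. This is a toy consensus demo, not AGEFL itself: the nodes, the 1-D "model" parameters, and the simulated-annealing-style acceptance rule for perturbing the push probability are all assumptions, since the abstract does not give the adaptation details.

```python
import math
import random

random.seed(1)

# Illustrative setup (not from the thesis): 8 edge nodes, each holding a
# 1-D model parameter; gossip averaging drives them toward consensus.
N = 8
params = [random.uniform(-1.0, 1.0) for _ in range(N)]

def spread(values):
    """Model disagreement across nodes (max minus min parameter)."""
    return max(values) - min(values)

def gossip_round(values, push_prob):
    """Each node pushes its model to one random peer with probability
    push_prob; sender and receiver average their parameters."""
    out = values[:]
    for i in range(N):
        if random.random() < push_prob:
            j = random.randrange(N - 1)
            j = j if j < i else j + 1   # pick a peer other than node i
            avg = (out[i] + out[j]) / 2.0
            out[i] = out[j] = avg
    return out

# Simulated-annealing-style adaptation of the push probability (a stand-in
# for AGEFL's rule): propose a perturbed probability, keep it if it reduced
# disagreement, otherwise accept with a cooling-temperature probability.
init_spread = spread(params)
p, temp = 0.5, 1.0
for step in range(40):
    cand = min(1.0, max(0.05, p + random.uniform(-0.1, 0.1)))
    new = gossip_round(params, cand)
    delta = spread(new) - spread(params)
    if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
        p = cand                        # adopt the perturbed push probability
    params = new                        # the gossip exchange itself always applies
    temp *= 0.9                         # cooling schedule

print("push prob:", round(p, 2), "disagreement:", round(spread(params), 4))
```

Because pairwise averaging never raises the maximum or lowers the minimum, disagreement is non-increasing round over round, which is the decentralized-consensus property the gossip scheme relies on; in the full algorithm each node would hold a model vector and interleave local SGD steps with these exchanges.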
Keywords/Search Tags:edge intelligence, task inference, federated learning, distributed deep learning