
A Study Of Distributed Model Training And Computing Offloading At Network Edge

Posted on: 2022-01-05
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z Y Meng
Full Text: PDF
GTID: 1488306323982429
Subject: Cyberspace security
Abstract
With the development of mobile computing and the Internet of Things (IoT), data sources are shifting from cloud datacenters to the increasingly widespread devices at the network edge (e.g., smartphones, surveillance cameras, and wearable devices). Efficiently exploiting the sensing data generated at the network edge through distributed model training and intelligent knowledge inference will be an important way to fuel the continuing boom of AI applications and to further promote the smart connection of everything. Traditionally, sending massive amounts of sensing data to a cloud platform for training incurs high bandwidth consumption and possible privacy leakage, which is unsuitable for applications with bandwidth constraints and high privacy requirements. Moreover, high latency degrades the user experience, and concentrating private data in the cloud raises further privacy concerns. In contrast, by merging networking, computing, storage, and other functions, edge computing provides model training services in close proximity to devices and empowers the IoT.

However, using edge computing to process massive amounts of sensing data is not easy either. On one hand, intelligent tasks are becoming increasingly complex, while the complexity and scale of models grow day by day. On the other hand, unlike traditional model training performed in a stable, resource-rich cloud system, the network edge poses three major scientific problems: resource constraints, environment dynamics, and data imbalance. These problems bring great challenges to model training and intelligent knowledge inference at the network edge. In this dissertation, we comprehensively analyze the requirements for model training and for deploying intelligent tasks, taking these three scientific problems into account. We focus on using edge-side computing power for distributed model training and intelligent task offloading to achieve
edge intelligence. The main research contributions can be summarized as follows.

First, we propose a model training method driven by data dispatching. Considering the resource constraints (e.g., the processing power of nodes and access bandwidth) and the imbalanced data on edge nodes, we propose a data-dispatching-driven model training algorithm based on Lyapunov optimization techniques. Specifically, the algorithm dynamically dispatches data between edge nodes in each time slot, so that each model training epoch requires only a limited number of edge nodes to upload local parameters and participate in the global model update. Through theoretical analysis, we demonstrate that the proposed approach achieves near-optimal performance without any future information about the system (data dynamics and network randomness). Extensive simulations show that the proposed method reduces the average system cost (measured as energy consumption) by nearly 46% compared to existing methods.

Second, we propose a decentralized model training method through dynamic topology construction. To fully utilize widely distributed data, we consider a wireless edge computing system that trains models in a decentralized, peer-to-peer (P2P) fashion. There are two major challenges on the way to efficient P2P model training: limited resources (e.g., network bandwidth and the battery life of mobile devices) and time-varying network connectivity caused by device mobility and wireless channel dynamics, a problem that has received little attention in recent years. To address these challenges, this dissertation studies the impact of topology construction on P2P training performance. Specifically, we dynamically construct an efficient P2P topology in which model aggregation occurs at the edge. In a nutshell, we first formulate topology construction for P2P learning (TCPL) under resource constraints as an integer programming problem. Then, a learning-driven method is proposed to adaptively construct a
topology at each training epoch. We further provide a convergence analysis for training machine learning models, even with non-convex loss functions. We evaluate the proposed algorithm through extensive simulations and on a physical platform. The results show that, under the same accuracy requirement, our method improves model training efficiency by about 11% under resource constraints, reduces communication cost by 30%, and cuts network traffic consumption by about 60% compared to the benchmarks.

Third, we propose a dynamic offloading and scheduling algorithm for intelligent tasks. Beyond training models at the network edge, we further study how to efficiently offload and schedule intelligent tasks for energy-delay optimization. We first propose a rounding-based dynamic offloading algorithm, named RMCL, which minimizes the maximum energy consumption of mobile devices under task computation latency constraints in a mobile edge computing (MEC) network. We also prove that RMCL achieves the optimum with high probability. To make offloading decisions for tasks immediately, we extend the offline RMCL to an online algorithm, named Online Dynamic Computing Offloading (OMCL). We then present a Maximum Residual Computational Density (MRCD) scheduling algorithm that determines a proper processing sequence for tasks offloaded to edge nodes, so that the failure ratio of these delay-sensitive tasks can be reduced. Extensive simulation results show that OMCL decreases the maximum energy consumption of mobile devices by 40% compared with executing computation tasks locally, and MRCD reduces the task failure ratio by up to 50% compared with a first-in-first-served policy.
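To make the first contribution concrete, the per-slot decision of a Lyapunov-based dispatching scheme can be sketched with a standard drift-plus-penalty rule. This is a minimal illustrative sketch, not the dissertation's algorithm: the variable names (queues, costs, the trade-off weight V) and the greedy allocation are assumptions chosen to show the general technique.

```python
# Minimal sketch of a drift-plus-penalty dispatching decision for one time
# slot. All names and the greedy allocation are illustrative assumptions,
# not the dissertation's exact formulation.

def drift_plus_penalty_dispatch(queues, arrival, costs, capacity, V):
    """Decide how much backlogged data each edge node dispatches this slot.

    queues[i]  : current data backlog at edge node i
    arrival[i] : data arriving at node i during this slot
    costs[i]   : energy cost per unit of data dispatched from node i
    capacity   : total dispatch budget this slot (e.g., bandwidth cap)
    V          : Lyapunov trade-off weight; larger V favors low cost
    """
    n = len(queues)
    # Dispatching one unit from node i reduces the Lyapunov drift roughly in
    # proportion to its backlog, but adds V * costs[i] to the penalty term.
    scores = [queues[i] - V * costs[i] for i in range(n)]
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)

    dispatch = [0.0] * n
    remaining = capacity
    for i in order:
        if scores[i] <= 0 or remaining <= 0:
            break  # further dispatching would hurt the drift-plus-penalty
        d = min(queues[i], remaining)
        dispatch[i] = d
        remaining -= d

    # Standard queue update: Q(t+1) = max(Q(t) - dispatch, 0) + arrival
    new_queues = [max(queues[i] - dispatch[i], 0) + arrival[i]
                  for i in range(n)]
    return dispatch, new_queues
```

Note how V controls the cost-backlog trade-off described in the text: with a large V, high-cost nodes are never drained and backlogs grow; with V = 0, the rule greedily drains the longest queues regardless of energy cost.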
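For the second contribution, the per-epoch topology construction can be illustrated with a greedy stand-in for the TCPL integer program: pick the best candidate P2P links subject to a per-node degree budget. The link-rate input and the degree cap are assumed resource proxies; the dissertation's learning-driven method is more sophisticated than this sketch.

```python
# Hedged sketch of per-epoch P2P topology construction under resource
# constraints. A greedy heuristic standing in for the TCPL integer
# program; the inputs are illustrative assumptions.

def construct_topology(link_rate, degree_cap):
    """Greedily select the highest-rate P2P edges for one training epoch.

    link_rate  : dict {(i, j): rate} of candidate undirected links
    degree_cap : max neighbors per node (models bandwidth/battery limits)
    """
    degree = {}
    topology = []
    # Consider candidate links from best to worst rate.
    for (i, j), rate in sorted(link_rate.items(), key=lambda kv: -kv[1]):
        if degree.get(i, 0) < degree_cap and degree.get(j, 0) < degree_cap:
            topology.append((i, j))
            degree[i] = degree.get(i, 0) + 1
            degree[j] = degree.get(j, 0) + 1
    return topology
```

Because connectivity is time-varying, `link_rate` would be re-measured and the function re-run at every epoch, which is what makes the constructed topology dynamic rather than fixed.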
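Finally, the MRCD idea for the third contribution can be sketched as a priority rule over offloaded tasks. The density formula used here (remaining work divided by time left until the deadline) is an illustrative guess at what "residual computational density" might mean; the dissertation's exact metric may differ.

```python
# Hedged sketch of an MRCD-style scheduling order for delay-sensitive
# tasks at an edge node. The density metric is an assumed stand-in, not
# the dissertation's exact definition.

def mrcd_order(tasks, now):
    """Order offloaded tasks by descending residual computational density.

    tasks : list of dicts with 'cycles' (remaining CPU work) and 'deadline'
    now   : current time; tasks whose deadline has passed are dropped
            (counted as failed)
    """
    feasible = [t for t in tasks if t["deadline"] > now]
    # Higher density = more work squeezed into less remaining time,
    # so serve the most urgent tasks first to cut the failure ratio.
    return sorted(feasible,
                  key=lambda t: t["cycles"] / (t["deadline"] - now),
                  reverse=True)
```

Under a first-in-first-served policy, a nearly-due task can be stuck behind earlier arrivals with slack deadlines; re-sorting by residual density is what lets a scheduler of this kind reduce the failure ratio of delay-sensitive tasks.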
Keywords/Search Tags: Edge Computing, Edge Intelligence, P2P Model Training, Distributed Model Training, Task Offloading