
An Energy Scheduling Framework For Edge Computing

Posted on: 2022-09-07  Degree: Master  Type: Thesis
Country: China  Candidate: Z K Wang  Full Text: PDF
GTID: 2518306572983059  Subject: Computer system architecture
Abstract/Summary:
Edge computing has emerged as a low-latency complement to cloud computing: its close proximity to users relieves the computing and communication pressure on the cloud center. At the same time, the geo-distributed nature of edge computing allows servers to harvest green energy from the environment on-site, making it a promising paradigm for green computing. In pursuit of environment-friendly computing, it is desirable to use as much green energy as possible, but green energy generation is intermittent and limited. When green energy fails to meet the computing demand, some brown energy (conventional energy) must inevitably be drawn from the power grid, which usually charges a step tariff. From the perspective of an edge computing operator, it is therefore important to schedule the computing and energy resources well, so as to lower the long-term operational expenditure (OPEX) while meeting spatially and temporally varying computing demands.

This long-term service management and energy scheduling problem is formulated as a Markov Decision Process (MDP). A greedy model that minimizes the one-shot energy cost is proposed, together with a corresponding offline optimization method. In scenarios where service requests and resource dynamics are predictable, this method obtains the optimal or near-optimal solution through offline optimization. In highly complex edge computing scenarios, however, the environment dynamics are usually unpredictable, so the model-based offline optimization method is no longer feasible. Inspired by the successful application of Deep Reinforcement Learning (DRL) in various domains, an improved algorithm, pDQN (Prioritizing DQN), is proposed on the basis of DQN (Deep Q-Network). The energy scheduling action is discretized to fit DQN, and a sampling method combining the episode reward with the Temporal-Difference error (TD-error) is proposed to accelerate the training convergence of DQN. The algorithm continuously explores the online dynamics of the environment (battery residual energy, green energy generation rate, electricity price, service request distribution, service location, etc.), learns their patterns, and adaptively makes service management and energy scheduling decisions in pursuit of long-term cost efficiency.

Extensive simulated experiments based on real-world traces verify the applicability of the offline greedy optimization method in scenarios with short-term predictable dynamics, as well as the efficiency of the online scheduling algorithm based on pDQN: pDQN converges 50% faster than DQN and reduces the long-term total cost by 19% after convergence.
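The abstract does not spell out the greedy one-shot model; the sketch below illustrates one plausible reading of it, in which demand is served first from on-site green energy, then from the battery, and any remainder is bought as brown energy billed under a step tariff. The function name, the `(threshold, price)` tariff representation, and the dispatch order are illustrative assumptions, not the thesis's exact formulation.

```python
def one_shot_energy_cost(demand_kwh, green_kwh, battery_kwh, tariff_steps):
    """Greedy one-shot dispatch sketch (illustrative, not the thesis's model).

    Demand is served by green energy first, then by the battery, and the
    remainder is bought as brown energy under a step tariff.
    tariff_steps: list of (cumulative_threshold_kwh, price_per_kwh),
    sorted by threshold; consumption beyond the last threshold is assumed
    to be billed at the last step's price.
    """
    residual = max(demand_kwh - green_kwh, 0.0)   # demand left after green energy
    from_battery = min(residual, battery_kwh)     # drain the battery next
    brown = residual - from_battery               # brown energy to purchase

    cost, billed, prev_threshold = 0.0, brown, 0.0
    for threshold, price in tariff_steps:
        used = min(billed, threshold - prev_threshold)
        cost += used * price
        billed -= used
        prev_threshold = threshold
        if billed <= 0.0:
            break
    if billed > 0.0:                              # demand beyond the last step
        cost += billed * tariff_steps[-1][1]
    return cost, battery_kwh - from_battery       # one-shot cost, remaining battery
```

For example, `one_shot_energy_cost(12.0, 5.0, 3.0, [(2.0, 0.5), (6.0, 0.8), (10.0, 1.2)])` buys 4 kWh of brown energy across the first two tariff steps and returns a cost of 2.6.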
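Similarly, the abstract only states that pDQN prioritizes replay sampling by combining the episode reward with the TD-error. The following minimal sketch shows one way such a buffer could look, assuming a linear blend with weight `beta` and episode returns normalized to [0, 1]; the actual priority formula and buffer design in the thesis may differ.

```python
import numpy as np

class PrioritizedReplay:
    """Sketch of a replay buffer whose sampling priority blends the
    transition's |TD-error| with the (normalized) return of the episode
    it came from. The blend weight `beta` and the priority form are
    assumptions for illustration only.
    """
    def __init__(self, capacity=100_000, beta=0.5, eps=1e-3):
        self.capacity, self.beta, self.eps = capacity, beta, eps
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error, episode_return):
        # episode_return is assumed to be pre-normalized to [0, 1].
        priority = (1 - self.beta) * (abs(td_error) + self.eps) \
                   + self.beta * (episode_return + self.eps)
        if len(self.buffer) >= self.capacity:      # drop the oldest entry
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Sample transitions with probability proportional to their priority.
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx]
```

Biasing the sample toward high-reward episodes in addition to high TD-error transitions is one way to explain the faster convergence reported for pDQN over plain DQN.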
Keywords/Search Tags:Edge Computing, Deep Reinforcement Learning, Resource Scheduling