
Research On Efficient Cloud Task Scheduling Algorithm Based On Deep Reinforcement Learning

Posted on: 2021-04-01    Degree: Master    Type: Thesis
Country: China    Candidate: L Y Ran    Full Text: PDF
GTID: 2428330602986083    Subject: Computer technology
Abstract/Summary:
Cloud computing is an important infrastructure of the modern information society. Cloud task scheduling, one of the most important technologies in cloud computing, directly affects the interests of both users and cloud service providers. Traditional cloud task scheduling algorithms usually follow a fixed strategy; they are simple to implement and schedule efficiently, which solves the offline task scheduling problem to a certain extent. With the widespread adoption of cloud computing and the growing scale of task scheduling, however, online task scheduling faces problems such as highly heterogeneous data center clusters, high volatility in the cluster operating environment and in the number of task submissions, and diverse optimization objectives. Traditional algorithms cannot dynamically adjust their scheduling strategies to perform online task scheduling efficiently. In recent years, deep reinforcement learning has achieved great success in theoretical research. This thesis studies cloud task scheduling algorithms based on deep reinforcement learning to address these problems. The main contributions of this thesis are as follows:

1) For the single-data-center task scheduling scenario, and in view of the high heterogeneity of the data center cluster, this thesis proposes a cloud task scheduling algorithm based on an improved deep Q-learning algorithm, modeling the states, actions, and rewards of the agent in deep reinforcement learning. To strengthen the agent's exploration ability, the algorithm replaces the traditional action exploration strategy with a Boltzmann action exploration strategy (a minimal sketch of such a strategy follows this abstract). Experiments on a simulated data set show that the algorithm converges well: the standard deviation of CPU utilization is reduced by 6.7%, effectively promoting cluster load balancing, and the instruction response time of tasks is improved by 5.3%, achieving high-performance cloud task scheduling.

2) In view of the high volatility and uncertainty of tasks in cloud computing, this thesis considers the correlation among multiple tasks and schedules them simultaneously; in reinforcement learning, however, this causes the agent's state space to explode. The thesis therefore proposes a task scheduling algorithm based on an improved deep deterministic policy gradient (DDPG) algorithm. To meet the optimization goal of cluster load balancing, the algorithm uses the earliest task scheduling algorithm to improve the action exploration strategy, and because offline training of the Actor network converges slowly, a supervised training method is added to accelerate convergence (a behavior-cloning-style sketch of this idea follows this abstract). Experiments on Alibaba Cloud cluster trace data show that the algorithm improves the instruction response time ratio by 3.3%, effectively improving efficiency when multiple tasks are scheduled simultaneously.

3) For the multi-data-center task scheduling scenario and its multi-objective optimization problem, this thesis uses a cost-aware action strategy to improve the original deep deterministic policy gradient algorithm. Experiments on Alibaba Cloud cluster trace data show that the algorithm reduces the average cluster usage cost by 1.3 yuan per hour while preserving the task instruction response time ratio and cluster load balance, effectively solving the multi-objective optimization problem.

4) With reference to the open-source cloud computing simulation platform CloudSim Plus, this thesis independently designs and implements a Python-based cloud task scheduling simulation system, which effectively improves the speed and convenience of the simulation experiments (a toy event-driven simulation loop is sketched after this abstract).
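Contribution 1) replaces the usual epsilon-greedy exploration of deep Q-learning with a Boltzmann (softmax) action exploration strategy. The snippet below is only a minimal sketch of such a strategy under common assumptions; the Q-value interface, the temperature value, and all names are illustrative and not taken from the thesis.

```python
import numpy as np

def boltzmann_action(q_values, temperature=1.0):
    """Sample an action from a softmax (Boltzmann) distribution over Q-values.

    q_values:    1-D array of Q(s, a) estimates, e.g. one entry per candidate
                 machine for the task currently being scheduled.
    temperature: large values explore almost uniformly, small values act
                 nearly greedily; annealing it during training moves the agent
                 from exploration toward exploitation.
    """
    # Subtract the max for numerical stability before exponentiating.
    prefs = (q_values - np.max(q_values)) / max(temperature, 1e-8)
    probs = np.exp(prefs)
    probs /= probs.sum()
    return np.random.choice(len(q_values), p=probs)

# Hypothetical usage: choose one of four machines for an incoming task.
q = np.array([1.2, 0.8, 1.5, 0.3])
action = boltzmann_action(q, temperature=0.5)
```

Compared with epsilon-greedy, this distribution explores in proportion to the estimated value of each action, so clearly poor machine choices are sampled rarely even early in training.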
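Contribution 2) adds a supervised training step to speed up convergence of the Actor network in the improved DDPG algorithm. The sketch below shows one common realization of that idea, behavior-cloning-style pretraining of the actor on (state, action) pairs produced by a heuristic scheduler; the network architecture, the demonstration source, and all names are assumptions rather than the thesis's implementation.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a cluster/task state vector to a continuous scheduling action."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

def pretrain_actor(actor, demo_batches, epochs=20, lr=1e-3):
    """Supervised pretraining: regress the actor onto demonstration actions.

    demo_batches: iterable of (states, actions) tensor batches generated by a
                  heuristic scheduler, e.g. an earliest-start-time rule.
    """
    opt = torch.optim.Adam(actor.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for states, actions in demo_batches:
            opt.zero_grad()
            loss = loss_fn(actor(states), actions)
            loss.backward()
            opt.step()
    return actor
```

After pretraining, the actor (and its target copy) would be handed to the usual DDPG training loop, which tends to converge faster because the policy already imitates a reasonable heuristic.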
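Contribution 4) describes a Python-based cloud task scheduling simulation system inspired by CloudSim Plus. The thesis's system itself is not reproduced here; the following is only a toy sketch of an event-driven scheduling loop to illustrate the general shape such a simulator can take, with all class and attribute names invented for the example.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    submit_time: float
    length: float = field(compare=False)     # instructions to execute

@dataclass
class Machine:
    mips: float                              # processing speed
    free_at: float = 0.0                     # time the machine becomes idle

def simulate(tasks, machines, choose_machine):
    """Replay a task trace against a small cluster model.

    choose_machine(task, machines, now) is the pluggable scheduling policy,
    e.g. a heuristic rule or a trained RL agent; it returns a Machine.
    Returns the average task response time.
    """
    heapq.heapify(tasks)                     # process tasks in submission order
    total_response, n = 0.0, 0
    while tasks:
        task = heapq.heappop(tasks)
        now = task.submit_time
        m = choose_machine(task, machines, now)
        start = max(now, m.free_at)          # wait if the machine is busy
        finish = start + task.length / m.mips
        m.free_at = finish
        total_response += finish - now
        n += 1
    return total_response / n

# Hypothetical usage with a simple "earliest idle machine" policy.
def earliest_idle(task, machines, now):
    return min(machines, key=lambda m: m.free_at)

trace = [Task(0.0, 1000.0), Task(1.0, 500.0), Task(1.5, 2000.0)]
cluster = [Machine(mips=100.0), Machine(mips=200.0)]
print(simulate(trace, cluster, earliest_idle))
```

Swapping `earliest_idle` for a learned policy is what makes such a loop usable as a training and evaluation environment for the scheduling agents described above.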
Keywords/Search Tags: Deep Reinforcement Learning, Cloud Computing, Cloud Task Scheduling, DQN, DDPG