
Research on Composition and Scheduling of Services Based on Reinforcement Learning

Posted on: 2022-09-25
Degree: Master
Type: Thesis
Country: China
Candidate: X Z Yu
Full Text: PDF
GTID: 2518306488492594
Subject: Computer Science and Technology
Abstract/Summary:
With the rapid development of microservice architecture and cloud computing technology, how to compose services sensibly, and how to schedule resources efficiently for the composed services, have gradually become focal points for researchers in industry and academia. This thesis focuses on service composition and task scheduling under the microservice architecture and studies each in turn.

Traditional service composition methods usually consider only static Web services when solving the constraint-satisfied service composition (CSSC) problem. When the non-functional QoS attributes of a service fluctuate, or a service becomes unavailable in a dynamic scenario, these methods no longer apply. In recent years, many researchers have proposed using reinforcement learning, especially Q-learning, to solve the dynamic CSSC problem. However, Q-learning relies on a Q-table to search for the optimal candidate service: as the CSSC problem becomes larger and more complex, the Q-table grows with it, making the final composition very costly. Taking the constraints of the service composition problem into account, this thesis proposes a new composition model, CSSC-DQN, which combines deep reinforcement learning with service composition. The model performs fine-grained uncertainty modeling of the services' non-functional QoS attributes and dynamically selects suitable candidate services for composition in complex scenarios (an illustrative sketch of this style of selection follows the abstract). Comparative experiments on real-world data sets demonstrate the model's effectiveness and strong generalization ability.

In addition, once suitable services have been composed, resources must be scheduled for them sensibly. A good task scheduling strategy is of great significance both to the quality of service experienced by users and to the resource utilization of service providers. Traditional scheduling schemes usually construct a scheduling plan first and then schedule all tasks according to it. As a result, they cannot respond flexibly and quickly to unexpected events, and they often ignore the dependencies between tasks, which leads to delayed scheduling and unnecessary waste of resources. Many recent efforts adopt reinforcement-learning-based task scheduling schemes, but these too often rely on a Q-table; with the large number of tasks in a cloud environment, such solutions are vulnerable to state explosion. To solve the task scheduling problem in the cloud environment, this thesis designs MSFRL, a multi-stage task scheduling framework based on deep reinforcement learning. The framework first trains a deep reinforcement learning agent to compute the priority of each task from its attributes and dependencies, and then trains a second agent to assign the cluster's available resources to tasks according to those priorities (a second sketch of this two-stage structure also follows the abstract). The two trained agents cooperate to make appropriate scheduling decisions for large-scale task scheduling problems in a dynamic environment. Finally, the framework is evaluated on the public Alibaba cluster data set, and the experimental results show that MSFRL outperforms other state-of-the-art work.
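To make the Q-table-versus-DQN argument concrete, the following is a minimal, hypothetical sketch of replacing a Q-table with a small Q-network for candidate-service selection, in the spirit of CSSC-DQN. The state encoding (a fixed-length QoS vector), the candidate-set size, the network shape, and all names are illustrative assumptions, not the thesis's actual design.

```python
# Hypothetical DQN-style candidate-service selection for CSSC.
# State: QoS attribute vector of the partial composition.
# Action: index of a candidate service for the current abstract task.
import random
import torch
import torch.nn as nn

N_QOS = 4          # e.g. response time, cost, availability, reliability (assumed)
N_CANDIDATES = 10  # candidate services per abstract task (assumed fixed)

class QNet(nn.Module):
    """Maps a QoS state vector to one Q-value per candidate service."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_QOS, 64), nn.ReLU(),
            nn.Linear(64, N_CANDIDATES))

    def forward(self, state):
        return self.net(state)

def select_service(qnet, state, epsilon=0.1):
    """Epsilon-greedy choice of the next concrete service."""
    if random.random() < epsilon:
        return random.randrange(N_CANDIDATES)
    with torch.no_grad():
        return int(qnet(state).argmax())

def td_update(qnet, target, optimizer, s, a, r, s_next, done, gamma=0.99):
    """One temporal-difference step on a single observed transition."""
    q = qnet(s)[a]
    with torch.no_grad():
        q_next = 0.0 if done else gamma * target(s_next).max()
    loss = (q - (r + q_next)) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Typical wiring: a target network periodically synced from qnet.
qnet, target = QNet(), QNet()
target.load_state_dict(qnet.state_dict())
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
```

Because Q-values come from a function approximator instead of a table lookup, memory cost no longer grows with the number of visited states, which is the scalability argument made in the abstract.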
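Likewise, a minimal sketch of the two-stage decision structure described for MSFRL: one network scores task priority from task attributes and dependency information, and a second scores (task, node) pairs for resource assignment. The feature choices, network sizes, and greedy selection loop are assumptions for illustration only; training of the two agents is omitted.

```python
# Hypothetical two-stage scheduling step in the spirit of MSFRL.
# Stage 1: a priority agent ranks the runnable tasks.
# Stage 2: a placement agent scores the chosen task against each node.
import torch
import torch.nn as nn

TASK_FEATS = 3   # e.g. cpu request, mem request, number of dependents (assumed)
NODE_FEATS = 2   # e.g. free cpu, free mem (assumed)

priority_net = nn.Sequential(nn.Linear(TASK_FEATS, 32), nn.ReLU(),
                             nn.Linear(32, 1))
placement_net = nn.Sequential(nn.Linear(TASK_FEATS + NODE_FEATS, 32),
                              nn.ReLU(), nn.Linear(32, 1))

def schedule_step(ready_tasks, nodes):
    """Pick the highest-priority ready task, then the best-scoring node.

    ready_tasks: task feature tensors whose dependencies are satisfied.
    nodes: node feature tensors for machines with free capacity.
    """
    with torch.no_grad():
        # Stage 1: priority agent ranks only tasks that are ready to run,
        # so dependency order is respected by construction.
        prios = torch.stack([priority_net(t).squeeze() for t in ready_tasks])
        task = ready_tasks[int(prios.argmax())]
        # Stage 2: placement agent scores the chosen task against each node.
        scores = torch.stack([placement_net(torch.cat([task, n])).squeeze()
                              for n in nodes])
        return task, nodes[int(scores.argmax())]
```

Restricting the priority agent to ready tasks is one simple way to encode dependencies; whether MSFRL does this or embeds the dependency graph directly into the state is not specified in the abstract.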
Keywords/Search Tags: reinforcement learning, microservice architecture, service composition, task scheduling