
Research On Virtual Machine Scheduling For Network Optimization In Clouds

Posted on: 2020-08-19
Degree: Master
Type: Thesis
Country: China
Candidate: Z Lian
Full Text: PDF
GTID: 2428330590472688
Subject: Software engineering

Abstract/Summary:
With the development and popularization of cloud computing, cloud data centers are widely used in daily production and life, and more and more applications rely on cloud resources to provide services. Sharing and managing those resources through virtualization, in the form of virtual machines (VMs), is the most common approach. The development of Internet of Things (IoT) and big data technologies has gradually made data-driven services mainstream, and applications have expanded from cloud data centers to edge clouds, forming a new converged cloud-edge system. These services must process massive amounts of data, which produces a large volume of data transmission inside the cloud-edge system. For data-intensive services with real-time requirements, network resources therefore become the key factor affecting Quality of Service (QoS). However, existing VM deployment methods for data-intensive services in a cloud-edge converged environment have notable shortcomings. Exploring VM deployment mechanisms and resource management methods for network optimization in the cloud-edge system is thus significant for improving the QoS of data-intensive applications.

This thesis focuses on VM scheduling for network optimization in the cloud-edge system and in the data center, with the goal of reducing application delay and improving QoS. The specific work is as follows:

(1) To meet the low-latency requirements of data-driven cloud applications, and exploiting both the low access latency of edge nodes and the powerful computing capacity of data centers, this thesis proposes DDP (Data-Driven Placement) for low-latency VM deployment of data-driven applications in cloud-edge converged environments. DDP flexibly selects the physical machine that provides resources for a service according to the location of the original data the service requires, balances data transmission delay against processing delay to effectively reduce total service delay, and is designed for both online tasks and batch tasks (a first illustrative sketch of this trade-off follows the abstract). Experiments verify the effectiveness of the solution, and the service delay is greatly reduced compared with a greedy strategy.

(2) To cope with the large volume of data transmission and the limited network bandwidth inside the data center, this thesis proposes ECBP (Edge-Cut Based Placement), a heuristic VM deployment algorithm for data center network load balancing that ensures efficient data transmission between VMs and balances the network load to reduce service delay. A tenant request is modeled as a weighted undirected graph, and the topological characteristics of the request model are analyzed. When a single physical machine cannot satisfy a tenant's resource requirements, the original request is divided into several sub-requests along a minimum edge cut set, and each sub-request is then deployed to a physical machine according to the current network load of the data center (a second sketch of this partitioning step follows the abstract). The solution minimizes the maximum link utilization to balance the network load, avoids network congestion, reduces service delay, and lowers network overhead to a certain extent.

(3) We analyze the open-source cloud resource management platform OpenStack and its related components, and adopt three different deployment schemes. Building on the VM deployment scheme for data center network load balancing, the approach is implemented with the HOT template of OpenStack's Heat component and with the Filter Scheduler, the core component of VM scheduling.
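The delay trade-off behind DDP can be pictured with a minimal sketch. The code below is not the thesis's DDP algorithm; it only illustrates the idea stated in (1): each candidate host (edge node or data-center machine) is scored by the sum of data transmission delay and processing delay, and the VM is placed on the feasible host with the lowest sum. All names and numbers (Host, bandwidth_to_data_gbps, the example workloads) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    bandwidth_to_data_gbps: float   # bandwidth between this host and the data source
    compute_gflops: float           # available processing capacity
    free_cores: int

def total_delay(host: Host, data_size_gb: float, workload_gflop: float) -> float:
    """Transmission delay plus processing delay of one request on one host, in seconds."""
    transmission = data_size_gb * 8 / host.bandwidth_to_data_gbps   # GB -> Gbit, then / Gbps
    processing = workload_gflop / host.compute_gflops
    return transmission + processing

def place(hosts: list[Host], data_size_gb: float, workload_gflop: float, cores: int) -> Host:
    """Pick the feasible host (edge or cloud) with the lowest total delay."""
    feasible = [h for h in hosts if h.free_cores >= cores]
    return min(feasible, key=lambda h: total_delay(h, data_size_gb, workload_gflop))

# Example: an edge node close to the data versus a distant but powerful data-center host.
edge = Host("edge-1", bandwidth_to_data_gbps=10.0, compute_gflops=50.0, free_cores=4)
cloud = Host("dc-1", bandwidth_to_data_gbps=1.0, compute_gflops=500.0, free_cores=64)
print(place([edge, cloud], data_size_gb=2.0, workload_gflop=100.0, cores=2).name)  # edge-1
```

In this toy instance the edge node wins because moving 2 GB over the 1 Gbps path to the data center costs far more time than its faster processing saves; with a heavier workload or a smaller data set the data-center host would be chosen instead.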
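The modelling step of ECBP can likewise be sketched. The snippet below is a simplified illustration, not the thesis's ECBP heuristic: it treats a tenant request as a weighted undirected graph (vertices are VMs, edge weights are the traffic between them) and, when the request exceeds a single physical machine's capacity, splits it along a global minimum edge cut (Stoer-Wagner, as provided by networkx) so that little traffic crosses machine boundaries. Capacity is counted in VMs here for simplicity, and all weights and names are assumptions.

```python
import networkx as nx

def split_request(request: nx.Graph, pm_capacity: int):
    """Recursively split a tenant request along minimum edge cuts until every
    sub-request fits on one physical machine (capacity counted in VMs here)."""
    if request.number_of_nodes() <= pm_capacity:
        return [request]
    # Stoer-Wagner global minimum cut of a weighted undirected graph.
    _, (part_a, part_b) = nx.stoer_wagner(request)
    return (split_request(request.subgraph(part_a).copy(), pm_capacity)
            + split_request(request.subgraph(part_b).copy(), pm_capacity))

# Example tenant request: 4 VMs with heavy traffic inside {vm1, vm2} and {vm3, vm4}.
G = nx.Graph()
G.add_edge("vm1", "vm2", weight=10)
G.add_edge("vm3", "vm4", weight=10)
G.add_edge("vm2", "vm3", weight=1)   # light inter-group traffic -> the cheapest cut

for sub in split_request(G, pm_capacity=2):
    print(sorted(sub.nodes()))        # ['vm1', 'vm2'] and ['vm3', 'vm4'], in some order
```

The subsequent step described in (2), mapping each sub-request to a physical machine according to the current network load so as to minimize the maximum link utilization, is not shown in this sketch.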
Keywords/Search Tags: Cloud-edge convergence, data center, low latency, network optimization, Quality of Service