
Study On Resource Management In Virtualized Environment

Posted on: 2020-12-01 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: F Guo | Full Text: PDF
GTID: 1368330575966582 | Subject: Computer software and theory
Abstract/Summary:
In virtualized environments (e.g., clouds), resource management is responsible for allocating and scheduling hardware resources, and it provides the foundation for sharing resources efficiently among multiple tenants. An efficient resource management mechanism not only improves resource utilization and thereby reduces cost, but also meets users' resource demands and thereby guarantees quality of service. However, with the development of hardware technology and the diversification of user requirements, existing resource management techniques can neither make full use of the growing hardware resources nor satisfy the diverse demands of users. How to optimize resource management in virtualized environments by exploiting new hardware features, new requirements, and new scenario characteristics has therefore become an urgent problem. This dissertation focuses on the three most important virtual resources in virtualized environments (memory, computing, and storage I/O) and studies how to optimize their management. Specifically, it covers a memory deduplication mechanism for hybrid-page virtualization systems, an online task scheduling mechanism for virtual GPUs, and an I/O deduplication and cache management mechanism for Docker containers. The main research contents and contributions are as follows:

(1) Research on Memory Deduplication Mechanism for Hybrid-Page Virtualization Systems

Most existing virtualization systems use hybrid-page memory management (i.e., both large pages and small pages), in which large pages significantly improve the TLB (Translation Look-aside Buffer) hit rate and memory access performance. However, applying existing memory deduplication techniques in a hybrid-page virtualization system causes a large number of large pages to be split, which significantly degrades memory access performance. To solve this problem, we propose SmartMD, an efficient memory deduplication mechanism for hybrid-page virtualization systems. Specifically, we first propose a memory monitoring mechanism that tracks the state of memory pages in real time: it identifies cold pages by scanning the access bits of pages, and counts page repetition rates using a counting Bloom filter. Next, we implement a reconstruction facility that gathers the scattered subpages of a split large page and then carefully recreates the descriptor and page table entry of that large page. Finally, we propose an adaptive conversion scheme that selectively splits large pages into base pages and selectively reconstructs split large pages, according to the access frequency and repetition rate of these pages as well as overall memory utilization. Experimental results show that SmartMD simultaneously achieves access performance similar to systems using only large pages and a deduplication rate similar to systems using only base pages.
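To make the monitoring step concrete, the following is a minimal sketch of a counting Bloom filter that estimates how often a page's content fingerprint has been seen. The sizes, hash scheme, and function names are illustrative assumptions for this abstract, not taken from the SmartMD implementation.

/*
 * Minimal sketch (assumed, not SmartMD's code): a counting Bloom filter
 * keyed by a 64-bit page-content fingerprint, used to estimate how many
 * times identical page content has appeared.
 */
#include <stdint.h>

#define CBF_SLOTS   (1u << 20)   /* number of 8-bit counters            */
#define CBF_HASHES  3            /* independent hash functions per item */

static uint8_t cbf[CBF_SLOTS];

/* Mix the fingerprint with a per-hash seed to get the i-th slot index. */
static uint32_t cbf_hash(uint64_t fingerprint, uint32_t i)
{
    uint64_t x = fingerprint ^ (0x9e3779b97f4a7c15ull * (i + 1));
    x ^= x >> 33;
    x *= 0xff51afd7ed558ccdull;
    x ^= x >> 33;
    return (uint32_t)(x % CBF_SLOTS);
}

/* Record one more occurrence of a page with this content fingerprint. */
void cbf_add(uint64_t fingerprint)
{
    for (uint32_t i = 0; i < CBF_HASHES; i++) {
        uint32_t slot = cbf_hash(fingerprint, i);
        if (cbf[slot] < UINT8_MAX)     /* saturate instead of overflowing */
            cbf[slot]++;
    }
}

/* Estimated repetition count: the minimum of the counters the fingerprint
 * maps to (may overestimate due to collisions, never underestimates). */
uint8_t cbf_estimate(uint64_t fingerprint)
{
    uint8_t min = UINT8_MAX;
    for (uint32_t i = 0; i < CBF_HASHES; i++) {
        uint32_t slot = cbf_hash(fingerprint, i);
        if (cbf[slot] < min)
            min = cbf[slot];
    }
    return min;
}

A conversion policy along the lines described above could then treat pages whose estimated repetition count exceeds a threshold, and whose access bits stay clear across scans, as candidates for splitting and deduplication.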
(2) Research on Online Task Scheduling Mechanism for Virtual GPUs

In GPU virtualization scenarios, application scheduling is an important means of improving GPU utilization and guaranteeing user quality of service. However, existing static GPU scheduling methods cannot accurately obtain the resource demands of applications, nor can they migrate and reschedule GPU applications online in real time. As a result, they cannot adjust in time when the GPU load changes, which leads to problems such as load imbalance, high energy consumption, and poor fairness. To further improve GPU utilization, energy efficiency, and fairness, we design and implement DCUDA, a virtual GPU scheduling platform that supports dynamically scheduling applications across multiple GPUs. In particular, we first propose a low-overhead monitoring method based on tracking API parameters, which enables real-time monitoring of GPU utilization and application requirements. We then study how to implement live migration transparently to users, and propose a series of techniques that guarantee the consistency of the runtime environment, memory data, and task state during migration. In addition, we propose several optimizations to reduce migration overhead, such as pre-initializing the environment and prefetching memory data. Finally, we propose a multi-stage, multi-objective scheduling strategy based on this live migration method. The strategy achieves dynamic load balancing across multiple GPUs and guarantees fairness between applications via time-slice and priority-based scheduling policies; it also reduces GPU energy consumption via a task-consolidation policy. Experiments with our prototype system show that DCUDA reduces the overloaded time of GPUs by 78.3% on average. As a result, for the workloads we studied, which consist of a wide range of applications, DCUDA reduces the average execution time of applications by up to 42.1%. Furthermore, DCUDA reduces energy consumption by 13.3% in the light-load scenario.

(3) Research on I/O Deduplication and Cache Management Mechanism for Docker Containers

Docker containers are widely deployed in data centers and cloud computing platforms as a lightweight virtualization technology. However, both the I/O performance and the cache efficiency of containers are still limited by their storage drivers, owing to coarse-grained copy-on-write operations and the large amount of redundancy in both I/O requests and the page cache. To improve the I/O performance and cache efficiency of containers, we develop HP-Mapper, a high-performance storage driver for Docker containers. First, we propose a two-level block mapping strategy and an on-demand block allocation mechanism, which flexibly support writes at multiple granularities and enable fine-grained copy-on-write with minimal overhead. Then we propose an efficient I/O detection and interception mechanism, which detects redundant I/O requests by recording a small amount of metadata and issuing a small number of queries, and which allows the required data to be read from the cache of another container. Finally, we provide an efficient cache management scheme that takes memory utilization, the number of cache copies, and the access frequency of pages into account, so as to better utilize memory and improve container performance. Experimental results with our prototype system show that HP-Mapper significantly reduces copy-on-write latency thanks to its finer-grained copy-on-write scheme. Moreover, HP-Mapper reduces cache usage by 59.2% on average by eliminating duplicated data. As a result, HP-Mapper improves the throughput of real-world workloads by up to 39.4% and speeds up container startup by 4.5x.
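As an illustration of the kind of bookkeeping such I/O deduplication and shared-cache management require, the sketch below keeps one entry per cached block, keyed by its backing-layer block ID, so that a read of an already-cached block can be served from another container's page and eviction can favor widely shared, frequently accessed pages. The structure, names, and weights are hypothetical assumptions for illustration, not the HP-Mapper implementation.

/*
 * Assumed sketch of shared-cache bookkeeping for container I/O dedup:
 * one entry per cached block of a shared image layer, tracking how many
 * containers reference it and how often it is accessed.
 */
#include <stdint.h>
#include <stdlib.h>

struct cache_entry {
    uint64_t block_id;      /* block in the shared image layer        */
    void    *page;          /* cached page holding the block's data   */
    uint32_t sharers;       /* containers currently referencing it    */
    uint32_t access_freq;   /* access counter used for eviction       */
    struct cache_entry *next;
};

#define BUCKETS 4096
static struct cache_entry *table[BUCKETS];

static struct cache_entry *lookup(uint64_t block_id)
{
    struct cache_entry *e = table[block_id % BUCKETS];
    while (e && e->block_id != block_id)
        e = e->next;
    return e;
}

/* On a read: if another container already cached this block, serve it
 * from that page instead of issuing a duplicate I/O request. */
void *dedup_read(uint64_t block_id)
{
    struct cache_entry *e = lookup(block_id);
    if (e) {
        e->sharers++;
        e->access_freq++;
        return e->page;      /* redundant I/O and duplicate cache copy avoided */
    }
    return NULL;             /* caller performs the real read, then inserts */
}

/* After the caller has read the block, publish its cached page so that
 * later reads from other containers can share it. */
void cache_insert(uint64_t block_id, void *page)
{
    struct cache_entry *e = malloc(sizeof(*e));
    if (!e)
        return;
    e->block_id = block_id;
    e->page = page;
    e->sharers = 1;
    e->access_freq = 1;
    e->next = table[block_id % BUCKETS];
    table[block_id % BUCKETS] = e;
}

/* Eviction priority: keep pages that are shared by many containers and
 * accessed frequently (lower score means evict earlier). The weighting
 * is arbitrary here. */
uint64_t eviction_score(const struct cache_entry *e)
{
    return (uint64_t)e->sharers * 16 + e->access_freq;
}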
Keywords/Search Tags: Virtualization, Resource Management, Memory Deduplication, GPU Scheduling, I/O Deduplication, Cache Management