
Scheduling Virtual Machines On NUMA Systems

Posted on: 2016-04-03
Degree: Master
Type: Thesis
Country: China
Candidate: H H Sun
Full Text: PDF
GTID: 2348330479953387
Subject: Computer system architecture
Abstract/Summary:
With the development of multi-core platforms and cloud computing, the Non-Uniform Memory Access (NUMA) architecture has become dominant in cloud data centers in recent years. However, NUMA architecture is not well supported in virtualized environments. Because of the semantic gap introduced by the virtualization layer, hypervisors know little about the characteristics of applications running in virtual machines (VMs). More importantly, to preserve the hypervisor's general applicability, the load-balancing strategies of virtual CPU (VCPU) schedulers do not consider the memory access characteristics of applications running in VMs, which can introduce significant shared-resource contention and unnecessary remote memory accesses.

We propose a NUMA-aware VCPU scheduler that improves the performance of memory-intensive applications on NUMA-based servers while maintaining the transparency of the virtualization layer. It collects performance monitoring unit (PMU) data for each VCPU and analyzes each VCPU's memory access characteristics. Then, according to these characteristics, it periodically reassigns all memory-intensive VCPUs evenly across the NUMA nodes, preferentially placing each one on its local node, which alleviates shared-resource contention and reduces unnecessary remote memory accesses. Moreover, when a physical CPU (PCPU) becomes idle, the scheduler preferentially steals a VCPU from the run queues of PCPUs on the same NUMA node, which helps maintain balanced last-level cache (LLC) contention and avoids extra remote memory accesses.

The experimental results show that our new VCPU scheduler significantly improves the performance of memory-intensive applications. More specifically, compared with the Credit scheduler of the Xen hypervisor, our VCPU scheduler can achieve a 45.2% performance improvement, while the overhead it introduces is negligible.
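The following minimal sketch illustrates the two policies described above: periodic even reassignment of memory-intensive VCPUs with a local-node preference, and NUMA-local work stealing for idle PCPUs. It is written in Python for readability only; all names, thresholds, and data structures here are assumptions for illustration, not the thesis's actual implementation, which modifies Xen's Credit scheduler.

    from dataclasses import dataclass
    from typing import Dict, List, Optional, Tuple

    NODE_COUNT = 2                        # assumed two-node NUMA machine
    MEM_INTENSIVE_THRESHOLD = 1_000_000   # assumed LLC misses per interval that
                                          # mark a VCPU as memory-intensive

    @dataclass
    class Vcpu:
        vid: int
        home_node: int                    # node holding most of this VCPU's memory
        llc_misses: int                   # sampled from PMU counters each interval

    def rebalance(vcpus: List[Vcpu]) -> List[List[Vcpu]]:
        """Spread memory-intensive VCPUs evenly over the NUMA nodes,
        keeping each one on its home (local) node whenever the even share allows."""
        nodes: List[List[Vcpu]] = [[] for _ in range(NODE_COUNT)]
        hot = [v for v in vcpus if v.llc_misses >= MEM_INTENSIVE_THRESHOLD]
        quota = -(-len(hot) // NODE_COUNT)       # ceiling division: even share per node
        spill = []
        for v in hot:                            # first pass: prefer the local node
            if len(nodes[v.home_node]) < quota:
                nodes[v.home_node].append(v)
            else:
                spill.append(v)
        for v in spill:                          # second pass: fill least-loaded nodes
            target = min(range(NODE_COUNT), key=lambda n: len(nodes[n]))
            nodes[target].append(v)
        return nodes

    def steal_for_idle_pcpu(idle_node: int,
                            runqueues: Dict[Tuple[int, int], List[Vcpu]]) -> Optional[Vcpu]:
        """runqueues maps (pcpu_id, node_id) to that PCPU's runnable VCPUs.
        An idle PCPU prefers stealing from PCPUs on its own NUMA node."""
        for key in sorted(runqueues, key=lambda k: k[1] != idle_node):
            if runqueues[key]:
                return runqueues[key].pop(0)
        return None

The sketch captures only the placement policy; in the real scheduler these decisions would be taken inside the hypervisor's scheduling paths, driven by periodically sampled hardware performance counters.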
Keywords/Search Tags: NUMA, virtualization, shared resource contention, VCPU scheduling