
Optimizing NUMA Scheduling In High Performance Virtualized Network

Posted on: 2019-05-03
Degree: Master
Type: Thesis
Country: China
Candidate: J S Tan
Full Text: PDF
GTID: 2428330590492468
Subject: Software engineering
Abstract/Summary:
Nowadays, virtualization has become a key technology in cloud computing because its consolidation and isolation benefits allow multiple virtual machines (VMs) to be hosted on a single physical machine. Each VM has its own resources, such as virtual CPUs, memory, and I/O devices. Moreover, with the advent of NFV (Network Functions Virtualization), VMs are expected to run more network-intensive workloads, such as packet processing and forwarding. Therefore, it is increasingly important to exploit opportunities to improve the network performance of VMs. Meanwhile, the NUMA (Non-Uniform Memory Access) architecture has become mainstream in modern servers because of its scalability. In NUMA systems, each CPU accesses its local memory faster than remote memory, which is known as the NUMA affinity problem. NUMA poses significant challenges for optimizing VM performance, since the underlying physical NUMA topology is not visible to the VMs. A large body of prior work has focused on optimizing the affinity between CPU and memory. However, as network speeds increase, NUMA affinity is no longer limited to CPU and memory, because in NUMA systems I/O devices such as NICs are attached to a single NUMA node. The affinity between a VM and an I/O device is optimal only when the VM runs on the NUMA node to which that device is attached. In the past, memory was usually an order of magnitude faster than early gigabit networks; however, 40 Gigabit and 100 Gigabit Ethernet narrow the performance gap between memory and the NIC, which makes network affinity an important factor in VM performance. To address these problems, this work studies how to optimize VM performance based on both NUMA memory affinity and NUMA network affinity in high-performance virtualized environments.

First, we analyze in depth how the holistic resource affinities, including memory affinity and network affinity, influence performance in NUMA systems. We then decompose network affinity into the affinity between the NIC and the processor and the affinity between the NIC and memory. After that, we quantify and model the affinity between virtual CPU and memory, between NIC and memory, and between NIC and virtual CPU. On this basis, we build a performance estimator, named RAIE, that is aware of the behavior of the tasks running in the VM. RAIE relies on holistic resource affinity parameters measured with platform-independent quantification approaches that only need to be executed once in advance, as well as on online monitoring of VM behavior without modifying the VM. Our evaluation shows that RAIE is a highly accurate model, with an average prediction accuracy of 93%.

Based on this model, we design and implement a scheduler, named RAIESched, that dynamically schedules VMs at runtime. RAIESched collects performance information for each VM at runtime and makes scheduling decisions based on RAIE. Our experimental results demonstrate that RAIESched achieves better performance than the default KVM scheduler, improving VM performance by up to 36% for real-world workloads in high-performance virtualized network NUMA systems.
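The network-affinity decomposition above assumes that the NIC's NUMA node and the placement of a VM's vCPU threads can be observed from the host. As a minimal illustration of that observation step (not the RAIE quantification described in the thesis), the sketch below reads a PCI NIC's NUMA node and the node each QEMU/KVM thread last ran on from Linux sysfs and procfs; the interface name eth0 and the QEMU process ID are assumptions for the example.

```python
# Minimal sketch: observe NUMA placement of a NIC and a VM's vCPU threads
# on a Linux/KVM host. Illustrative only; not the RAIE model from the thesis.
import os
import re

def nic_numa_node(iface="eth0"):
    """Return the NUMA node the PCI NIC is attached to, or -1 if unknown."""
    path = f"/sys/class/net/{iface}/device/numa_node"
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return -1

def cpu_to_node(cpu):
    """Find which NUMA node a logical CPU belongs to, via its sysfs nodeN link."""
    base = f"/sys/devices/system/cpu/cpu{cpu}"
    for entry in os.listdir(base):
        m = re.fullmatch(r"node(\d+)", entry)
        if m:
            return int(m.group(1))
    return -1

def vm_thread_nodes(qemu_pid):
    """Map each thread of a QEMU process to the node of the CPU it last ran on."""
    nodes = {}
    task_dir = f"/proc/{qemu_pid}/task"
    for tid in os.listdir(task_dir):
        with open(f"{task_dir}/{tid}/stat") as f:
            # Field 39 of /proc/<tid>/stat is the CPU the thread last ran on.
            fields = f.read().rsplit(")", 1)[1].split()
        nodes[int(tid)] = cpu_to_node(int(fields[36]))
    return nodes
```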
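RAIESched itself is not reproduced in this record, but the placement action it implies, co-locating a VM's vCPUs and memory with the node that holds the NIC when network affinity dominates, can be sketched with standard libvirt tooling. The example below shells out to virsh vcpupin and virsh numatune; the guest name "vm1" and its vCPU count are hypothetical, and a real scheduler such as RAIESched would first weigh the predicted benefit of the migration against its cost, which this sketch omits.

```python
# Minimal sketch of an affinity-driven placement step (not the actual RAIESched):
# pin a KVM guest's vCPUs to the NUMA node that hosts the NIC and steer its
# memory allocations toward the same node.
import subprocess

def cpus_of_node(node):
    """Read the logical CPU list of a NUMA node from sysfs (e.g. '0-7,16-23')."""
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        return f.read().strip()

def colocate_with_nic(domain, n_vcpus, nic_node):
    """Pin all vCPUs of `domain` to the NIC's node and prefer its memory."""
    cpulist = cpus_of_node(nic_node)
    for vcpu in range(n_vcpus):
        subprocess.run(["virsh", "vcpupin", domain, str(vcpu), cpulist],
                       check=True)
    # Restrict the running guest's memory allocations to the NIC's node.
    subprocess.run(["virsh", "numatune", domain, "--nodeset", str(nic_node),
                    "--live"], check=True)

# Example with a hypothetical guest: place vm1's 4 vCPUs on the NIC's node.
# colocate_with_nic("vm1", 4, nic_numa_node("eth0"))
```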
Keywords/Search Tags: Non-Uniform Memory Access, Network affinity, Virtualization