
A Study Of High Performance Network Virtualization Technique

Posted on: 2013-02-24
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H B Yang
GTID: 1118330362467322
Subject: Computer system architecture
Abstract/Summary:
With computer hardware developing faster than software, virtualization has been regarded as one of the key research topics driven by cloud computing. After years of development in the field, CPU virtualization and memory virtualization have become quite mature. However, the relatively poor performance of I/O virtualization has limited the performance of virtualization as a whole, so current research in the field focuses on improving I/O virtualization and the efficiency of physical devices. Three kinds of I/O virtualization dominate the mainstream today: Split I/O, Direct I/O and Passthrough I/O. Each improves I/O device performance to a degree, and hardware support has gradually raised I/O virtualization performance further, but all are constrained by their architectural design and by hardware limitations. These three I/O virtualization technologies still have a long way to go before their performance is mature.

Based on a study of the current state of I/O virtualization technology, the dissertation proposes a series of solutions to these challenges. In detail, the main contributions of this work are the following:

1. We propose a method that lowers CPU utilization while keeping throughput. Interrupt coalescing is one such method and has been widely used in hardware: instead of signaling the CPU immediately, the device holds incoming data for a while and then sends a batched interrupt. By studying the architecture of Xen, we propose a network I/O scheme based on interrupt coalescing that switches between different modes according to the network traffic (an illustrative sketch of this mode-switching idea follows the abstract). With this scheme we save 8% CPU utilization for a single VM and up to 50% for 9 VMs, while throughput in these experiments does not drop at all. We abstract a dual-layer model from this work, consisting of a Physical Layer and a Virtual Layer, and use the latency in these two layers for further optimization.

2. To address the two limitations of scalability and dependence on hardware support, we propose a generic virtualization architecture for SR-IOV devices that can be implemented on multiple Virtual Machine Monitors (VMMs). With the support of our architecture, the SR-IOV device driver is highly portable and agnostic of the underlying VMM. Based on our first implementation of a network device driver, we applied several optimizations to reduce virtualization overhead and then carried out comprehensive experiments to evaluate SR-IOV performance. The results show that SR-IOV can achieve line rate (9.48 Gbps) and scale up to 60 VMs at a cost of only 1.76% additional CPU overhead per VM, without sacrificing throughput. It offers better throughput, better scalability, and lower CPU utilization than paravirtualization.

3. We design DistriBit, a distributed dynamic binary translation (DBT) framework for thin clients. In DistriBit, work is divided according to the functionality and computing power of the server and the thin client: the powerful server is responsible for code translation and optimization, while the resource-limited thin client is responsible only for code execution. We also design a code cache management strategy for the thin client; based on the thin client's cache size and code execution behavior, the strategy is driven from the server end to adapt to this specific situation (a sketch of such a code cache appears after the abstract). Guided by this cache management strategy, thin clients can perform more complex and efficient code management.
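The following is a minimal, self-contained sketch (not taken from the dissertation's Xen implementation) of the adaptive interrupt-coalescing idea in contribution 1: a virtual NIC switches between an immediate-interrupt mode and a coalesced mode based on observed traffic. The watermark values, type names, and the pick_mode() helper are illustrative assumptions.

/* Sketch of traffic-driven switching between interrupt delivery modes.
 * Thresholds and names are illustrative, not the dissertation's code. */
#include <stdio.h>

enum irq_mode { MODE_IMMEDIATE, MODE_COALESCED };

/* Hypothetical thresholds (packets per polling interval), with hysteresis
 * so the mode does not flap around a single boundary. */
#define HIGH_WATERMARK 8000
#define LOW_WATERMARK  2000

static enum irq_mode pick_mode(enum irq_mode cur, unsigned pkts_per_interval)
{
    if (cur == MODE_IMMEDIATE && pkts_per_interval > HIGH_WATERMARK)
        return MODE_COALESCED;   /* heavy traffic: batch events, save CPU */
    if (cur == MODE_COALESCED && pkts_per_interval < LOW_WATERMARK)
        return MODE_IMMEDIATE;   /* light traffic: deliver at once, keep latency low */
    return cur;
}

int main(void)
{
    unsigned samples[] = { 500, 1500, 9000, 12000, 3000, 1000 };
    enum irq_mode mode = MODE_IMMEDIATE;

    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        mode = pick_mode(mode, samples[i]);
        printf("interval %u: %u pkts -> %s\n", i, samples[i],
               mode == MODE_COALESCED ? "coalesced" : "immediate");
    }
    return 0;
}

The gap between the two watermarks is what keeps the virtual NIC from oscillating between modes when traffic hovers near a single threshold.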
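Similarly, the following is a small, assumed sketch of the kind of code cache a thin client might keep for translated blocks in a distributed DBT such as DistriBit: a fixed-capacity table keyed by guest PC that evicts the least recently executed entry when full. The structure, slot count, and eviction policy are illustrative and are not drawn from the dissertation.

/* Sketch of a thin-client code cache for translated blocks. */
#include <stdio.h>
#include <stdint.h>

#define CACHE_SLOTS 4   /* tiny capacity so eviction is visible in the demo */

struct tb_entry {
    uint64_t guest_pc;   /* address of the translated basic block */
    uint64_t last_used;  /* logical timestamp of the last execution */
    int      valid;
};

static struct tb_entry cache[CACHE_SLOTS];
static uint64_t clock_ticks;

/* Look up a translated block; on a miss, evict the LRU slot and install it. */
static int cache_lookup(uint64_t pc)
{
    int victim = -1;

    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].valid && cache[i].guest_pc == pc) {
            cache[i].last_used = ++clock_ticks;   /* hit: refresh recency */
            return 1;
        }
        if (!cache[i].valid) {
            if (victim < 0 || cache[victim].valid)
                victim = i;                       /* prefer an empty slot */
        } else if (victim < 0 ||
                   (cache[victim].valid &&
                    cache[i].last_used < cache[victim].last_used)) {
            victim = i;                           /* otherwise track the LRU entry */
        }
    }

    /* Miss: in the real framework the server would supply a fresh translation
     * for this guest PC; here we simply install a placeholder entry. */
    cache[victim] = (struct tb_entry){ pc, ++clock_ticks, 1 };
    return 0;
}

int main(void)
{
    uint64_t trace[] = { 0x400000, 0x400040, 0x400000, 0x400080,
                         0x4000c0, 0x400100, 0x400000 };
    for (unsigned i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
        printf("pc=0x%llx %s\n", (unsigned long long)trace[i],
               cache_lookup(trace[i]) ? "hit" : "miss");
    return 0;
}

In the real system the eviction decision would be informed by the server-side management strategy rather than a purely local LRU policy.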
Keywords/Search Tags: I/O virtualization, Xen, SR-IOV, CrossBit, dynamic binary translator, thin client, system virtual machine, process virtual machine