
Research On Performance Optimization Of Large Scale Elastic Resource In IaaS Cloud Computing

Posted on: 2015-08-30    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Z N Zhang
GTID: 1108330509960959    Subject: Computer Science and Technology

Abstract/Summary:
The development of cloud computing has changed the way people use computing resources and has driven a shift of information technology from computing to services. Infrastructure-as-a-Service (IaaS) is one of the most prevalent service models. An IaaS cloud provides hardware infrastructure to users as quantifiable, scalable virtual resources, here called elastic virtual resources, including computing, storage and network resources. The performance of these resources is a key factor that determines both the cost and the quality of the service, and it concerns providers and users alike. Improving the performance of elastic resources faces real challenges and is of great value to both academia and industry.

Today, provisioning a large-scale virtual machine cluster suffers from long latency; with the traditional approach of transferring whole virtual machine images, it is measured in hours. The reason is that booting a virtual machine must load a large amount of data into memory through the virtualization layer, and most traditional solutions cannot combine on-demand reads with storage migration. This dissertation presents a block-device-based, peer-to-peer virtual machine cluster provisioning method called VMThunder. To further reduce application provisioning latency, an improved method called VMThunder+ is introduced, which boots virtual machines with the full application state already in place. Both VMThunder and VMThunder+ provision hundreds of virtual machines within 10 to 20 seconds. The methods have been deployed and evaluated at the National Supercomputing Center in Tianjin.
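The abstract does not spell out VMThunder's internals, so the following is only a minimal, hypothetical Python sketch of the general idea behind block-device-based, peer-to-peer on-demand provisioning: a booting node requests only the blocks it actually touches, checks its local cache first, then asks peers that booted earlier, and falls back to the origin image server as a last resort, caching whatever it fetches so later nodes can read from it. The class and method names (Node, read_block) are illustrative, not from the dissertation.

# Hypothetical sketch of on-demand, peer-to-peer block fetching (not the
# actual VMThunder implementation): each node serves blocks it has cached,
# so the origin image server is hit at most once per block cluster-wide.

class Node:
    def __init__(self, name, origin, peers=None):
        self.name = name
        self.origin = origin          # dict: block_id -> bytes (the image server)
        self.peers = peers or []      # nodes provisioned earlier
        self.cache = {}               # locally cached blocks

    def read_block(self, block_id):
        """Return one image block, fetching it on demand."""
        if block_id in self.cache:                    # 1. local cache hit
            return self.cache[block_id]
        for peer in self.peers:                       # 2. try peers first
            if block_id in peer.cache:
                data = peer.cache[block_id]
                break
        else:                                         # 3. fall back to the origin
            data = self.origin[block_id]
        self.cache[block_id] = data                   # keep it for later readers
        return data


if __name__ == "__main__":
    image = {i: f"block-{i}".encode() for i in range(8)}   # toy VM image
    seed = Node("node-0", image)
    seed.read_block(0)                                      # node-0 boots first
    node1 = Node("node-1", image, peers=[seed])
    node1.read_block(0)                                     # served by node-0, not the origin
    print(sorted(node1.cache))                              # [0]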
To support complex functionality and convenient management, virtual storage systems are organized into multiple virtual abstraction layers. This structure, however, degrades performance, especially when the abstraction layers span the network. Read-ahead is the most important mechanism for covering the performance gap between layers, but it runs into new problems as the number of layers grows, such as relayed read-ahead and overlapped read-ahead. More specifically, traditional read-ahead opens its window too aggressively, producing an excessive burst of data at the start of a single sequential stream, and the more streams a workload contains, the larger the impact. This dissertation first shows that relayed and overlapped read-ahead are inherent to multi-layer structures, and then presents a new read-ahead window synchronization mechanism, called the sectorial synchronization mechanism, which is more conservative and saves significant time in the initiation phase. Evaluations show that the new mechanism improves read performance by 20% to 50%.

Virtual machine migration raises the availability of a virtual machine above that of the physical machine it runs on. However, traditional pre-copy migration must move a large amount of data before switching over, while new dirty data is produced continuously, which can make live migration very slow; existing post-copy solutions, in turn, lack generality. This dissertation presents a storage migration solution, DLSM, and a live (memory) migration solution, BDLM. Both are lightweight post-copy solutions that leave the hypervisor unmodified. BDLM exposes the memory data as a block device through balloon over-inflation, so that it can reuse DLSM to post-copy the migration data with little overhead. DLSM reduces migration time by 50% to 70% compared with the CSI storage migration method in KVM. BDLM keeps the same downtime as the original KVM migration while reducing the migration time significantly; for example, it shortens the migration time from 90 s to 20 s with a 1 GB workload and a dirty-data production rate of 115 MB/s.
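DLSM and BDLM are described only at a high level here. As an illustration of the post-copy principle they build on, the sketch below (hypothetical, not the dissertation's code) lets the destination take over immediately and pulls each block from the source only when it is first accessed, with a background pass draining the rest, so downtime stays constant while the bulk of the data moves after the switch-over. The names PostCopyDisk, read and background_pull are invented for this example.

# Hypothetical illustration of post-copy migration (not DLSM/BDLM itself):
# the destination resumes right away and fetches data from the source
# lazily, on first access, instead of copying everything up front.

class PostCopyDisk:
    def __init__(self, source_blocks):
        self.source = source_blocks   # blocks still living on the source host
        self.local = {}               # blocks already pulled to the destination

    def read(self, block_id):
        if block_id not in self.local:            # "fault": block not yet migrated
            self.local[block_id] = self.source.pop(block_id)
        return self.local[block_id]

    def background_pull(self):
        """Drain the remaining blocks so the source can be released."""
        while self.source:
            block_id, data = self.source.popitem()
            self.local[block_id] = data


if __name__ == "__main__":
    disk = PostCopyDisk({i: f"data-{i}".encode() for i in range(4)})
    print(disk.read(2))        # fetched on demand right after switch-over
    disk.background_pull()     # everything else migrates in the background
    print(len(disk.local))     # 4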
Keywords/Search Tags: Cloud Computing, Infrastructure-as-a-Service, Elastic Resource, Fast Provisioning of Large Scale VM Cluster, Performance Optimization for Virtualized Storage, Efficient VM Live Migration