
Performance-Aware Scheduling For Data-Intensive Cloud Computing

Posted on: 2012-05-16
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Shadi Ibrahim
Full Text: PDF
GTID: 1118330368984107
Subject: Computer Architecture
Abstract/Summary:
Data volumes are ever growing, from traditional applications such as databases and scientific computing to emerging applications such as Web 2.0 and online social networks. This growth has driven intensive research on scalable data-intensive systems, including MapReduce and Dryad. Among these systems, Hadoop, an open-source MapReduce implementation, is widely adopted both in academia and by companies such as Facebook and Yahoo!. Recently, MapReduce has also been deployed in the cloud as a service. Owing to this wide adoption, the performance of Hadoop in particular, and of MapReduce in general, has received much attention in systems research. Meanwhile, virtual machines (VMs) have become increasingly important for efficient and flexible resource provisioning: cloud computing gives users the ability to perform elastic computation on large pools of VMs without the burden of owning or maintaining physical infrastructure. When building large-scale data-intensive systems in the cloud, that is, data-intensive cloud computing, developers therefore need to understand the principles of designing large systems so as to obtain performance guarantees, load balancing, and fair charging for resource use. Performance in data-intensive cloud computing is determined by many factors, including data locality, application type, and the underlying, mainly VM-based, cloud infrastructure.

First, a novel replica-aware map scheduler named Maestro is presented to overcome non-local map execution in MapReduce systems. Maestro schedules map tasks in two phases. The first, first-wave scheduling, fills all the empty slots when the job initializes; the second, run-time scheduling, assigns map tasks according to data locality, node availability, and block weight, the probability that a given replica is the best one on which to schedule the task (a toy sketch of the first-wave policy follows below). Maestro not only achieves higher locality in MapReduce-like systems, but also reduces unnecessary map-task speculation and balances the distribution of intermediate data ahead of the shuffle phase.

Existing MapReduce systems overlook the data-skew problem that arises when intermediate keys vary significantly both in frequency and in their distribution across data nodes, a situation referred to here as partitioning skew. Experiments with Hadoop demonstrate that, in the presence of partitioning skew, applications suffer performance degradation from long data transfers during the shuffle phase and from computation skew, particularly in the reduce phase. To address this problem, a novel algorithm for locality-aware and fairness-aware key partitioning in MapReduce, called LEEN, is developed. LEEN embraces an asynchronous map and reduce scheme: all buffered intermediate keys are partitioned according to their frequencies and the fairness of the expected data distribution after the shuffle phase. LEEN not only achieves higher locality and reduces the amount of shuffled data, but also guarantees a fair distribution of the reduce inputs (a greedy sketch of this locality/fairness trade-off also follows below).
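To make the first wave concrete, the following toy scheduler is a minimal sketch under simplifying assumptions, not the dissertation's implementation: it scores each block by the free map slots on the nodes hosting its replicas (a stand-in for Maestro's probabilistic block weight) and places the hardest-to-place blocks first. All identifiers here are hypothetical.

    # Toy sketch of Maestro-style first-wave scheduling. The weight below is a
    # simplification: Maestro's actual block weight is probabilistic, while this
    # one just counts free map slots on the nodes hosting a block's replicas.
    def block_weight(block, replicas, free_slots):
        """Low weight = few local placement options = high risk of a
        non-local map execution, so such blocks are scheduled first."""
        return sum(free_slots.get(node, 0) for node in replicas[block])

    def first_wave(replicas, free_slots):
        """Fill empty slots at job start, preferring replica-hosting nodes."""
        assignment = {}
        for block in sorted(replicas, key=lambda b: block_weight(b, replicas, free_slots)):
            for node in replicas[block]:           # only consider local nodes
                if free_slots.get(node, 0) > 0:
                    assignment[block] = node
                    free_slots[node] -= 1
                    break
        return assignment

    # Three blocks, two nodes, one free slot each: the single-replica blocks
    # b1 and b3 claim their hosts; b2 is left for the run-time scheduler,
    # which re-evaluates weights as slots free up.
    print(first_wave({"b1": ["n1"], "b2": ["n1", "n2"], "b3": ["n2"]},
                     {"n1": 1, "n2": 1}))          # {'b1': 'n1', 'b3': 'n2'}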
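The LEEN trade-off can likewise be illustrated with a small greedy heuristic. This is only a sketch: the cost function, the equal locality/fairness weighting alpha=0.5, and all identifiers are assumptions made here, standing in for LEEN's actual scoring of key frequencies and expected post-shuffle fairness.

    # Toy sketch of LEEN-style key partitioning: assign each intermediate key
    # to one reduce node, trading shuffle volume (locality) against balanced
    # reduce inputs (fairness). Greedy heuristic, invented for illustration.
    def leen_partition(key_freqs, alpha=0.5):
        """key_freqs: {key: {node: count}}, per-node frequency of each key.
        Returns {key: node}."""
        load = {n: 0 for freqs in key_freqs.values() for n in freqs}
        plan = {}
        # Heaviest keys first: they dominate both shuffle volume and skew.
        for key in sorted(key_freqs, key=lambda k: -sum(key_freqs[k].values())):
            freqs, total = key_freqs[key], sum(key_freqs[key].values())
            def cost(node):
                shuffled = total - freqs.get(node, 0)                  # records moved off-node
                imbalance = (load[node] + total) - min(load.values())  # fairness term
                return alpha * shuffled + (1 - alpha) * imbalance
            best = min(load, key=cost)
            plan[key] = best
            load[best] += total
        return plan

    # Key "a" is concentrated on n1, key "b" is spread out: "a" stays local,
    # "b" goes to n2 to keep the reduce inputs balanced.
    print(leen_partition({"a": {"n1": 90, "n2": 10},
                          "b": {"n1": 40, "n2": 60}}))  # {'a': 'n1', 'b': 'n2'}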
In the cloud, the computing unit is the virtual machine; it is therefore important to demonstrate the applicability of data-intensive computing in a virtualized data center. Although virtualization brings many benefits, such as improved resource utilization and isolation, VM interference makes performance predictability and system throughput challenging problems in large-scale virtualized environments. To this end, a quantitative analysis of the impact of interference on system fairness is presented. Because the cloud is an economics-based distributed system, the concept of pricing fairness is adopted from microeconomics; the analysis shows that the current pay-as-you-go model is neither personally nor socially fair. Accordingly, to remedy the unfairness caused by interference, a new pricing scheme, pay-as-you-consume, is proposed, in which users are charged according to their effective resource consumption, excluding interference. The key idea behind pay-as-you-consume is a machine-learning-based model that predicts the relative cost of interference (sketched below). Preliminary experimental results with Xen demonstrate the accuracy of the prediction model and the fairness of the pay-as-you-consume pricing scheme.

The introduction of virtualization into Hadoop clusters poses further challenges arising from the architectural design of the hypervisor. A series of experiments is conducted to measure and analyze the performance of Hadoop on VMs in terms of Hadoop Distributed File System (HDFS) throughput, performance variation under different VM consolidation and configuration choices, and task speculation. From these results, this dissertation outlines several issues that must be considered when adapting MapReduce to run entirely on virtual machines, such as decoupling the storage system (HDFS) from the computation units (the VMs). A novel MapReduce framework that runs on virtual machines, called Cloudlet, is then proposed.

Virtualization interference stems from intertwined factors, including the application type, the number of concurrent VMs, and the VM scheduling algorithms used within the host. Further studies reveal that selecting the appropriate pair of disk I/O schedulers, one in the hypervisor and one in the virtual machine, can significantly affect application performance. Moreover, a typical Hadoop application consists of different interleaved stages, each with its own I/O workload and access pattern. As a result, a fixed scheduler pair is sub-optimal not only across different MapReduce applications but also across the sub-phases of a single job. Accordingly, a novel approach is proposed for adaptively tuning the disk scheduler pair in both the hypervisor and the virtual machines during the execution of a single MapReduce job (see the sketch below). Experimental results show that MapReduce performance improves significantly; in particular, adaptive tuning of the disk scheduler pair yields a 25% performance improvement on a sort benchmark with Hadoop.
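At its core, pay-as-you-consume is a discount on the measured bill proportional to the predicted interference. The sketch below illustrates the arithmetic only; the linear predictor, its features, and its weights are hypothetical stand-ins for the dissertation's machine-learning model.

    # Toy sketch of pay-as-you-consume pricing: the bill is reduced by the
    # predicted share of runtime lost to VM interference. The linear model
    # and its features are invented stand-ins for the learned predictor.
    def predict_interference(features, weights, bias):
        """Predict the relative cost of interference: the fraction of the
        measured runtime attributable to co-located VMs, clamped to [0, 1)."""
        score = bias + sum(weights[name] * value for name, value in features.items())
        return min(max(score, 0.0), 0.99)

    def pay_as_you_consume(measured_hours, rate, features, weights, bias):
        """Charge only effective consumption: measured time minus the predicted
        interference overhead (plain pay-as-you-go would charge all of it)."""
        interference = predict_interference(features, weights, bias)
        return measured_hours * (1.0 - interference) * rate

    # Hypothetical model: more co-located VMs and heavier I/O -> more slowdown.
    weights = {"colocated_vms": 0.05, "io_intensity": 0.10}
    bill = pay_as_you_consume(
        measured_hours=10, rate=0.40,
        features={"colocated_vms": 3, "io_intensity": 0.8},
        weights=weights, bias=0.0,
    )
    print(f"${bill:.2f}")   # $3.08, versus $4.00 under plain pay-as-you-go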
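As an actuation mechanism for such adaptive tuning, Linux exposes the per-device I/O scheduler through sysfs, so switching elevators at a phase boundary is a one-line write. The sketch below shows only that plumbing, with a hypothetical phase-to-scheduler policy; the actual pairing rules in the dissertation are derived experimentally.

    # Toy sketch of phase-adaptive disk scheduler tuning inside a guest VM.
    # The PHASE_SCHEDULER policy is a hypothetical example; the sysfs path is
    # the standard Linux interface for switching a block device's elevator.
    PHASE_SCHEDULER = {
        "map":     "deadline",  # sequential reads of input splits
        "shuffle": "cfq",       # many competing fetch streams
        "reduce":  "deadline",  # mostly sequential merge and output writes
    }

    def set_io_scheduler(device, scheduler):
        """Switch the I/O scheduler of a block device at run time (root only)."""
        with open(f"/sys/block/{device}/queue/scheduler", "w") as f:
            f.write(scheduler)

    def on_phase_change(device, phase):
        set_io_scheduler(device, PHASE_SCHEDULER[phase])

    # A job-level hook would call this at every phase boundary, and a matching
    # call in the hypervisor (dom0 under Xen) would retune the other half of
    # the scheduler pair, e.g.: on_phase_change("xvda", "shuffle")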
Keywords/Search Tags: Cloud computing, Virtualization, MapReduce, Hadoop, Replica-aware scheduling, Partitioning skew, Meta-scheduler, Fairness