
Decentralized Control Architectures for Power Management in Data Centers

Posted on: 2014-07-17  Degree: Ph.D  Type: Thesis
University: Drexel University  Candidate: Wang, Rui  Full Text: PDF
GTID: 2458390005493307  Subject: Engineering
Abstract/Summary:
Data centers host online services on distributed computing systems comprising heterogeneous networked servers. Virtualization technology is a promising solution for supporting multiple online services with fewer computing resources. It enables a single server to be shared among multiple performance-isolated platforms called virtual machines (VMs), where each VM can serve one or more applications. Virtualization also enables on-demand computing, in which resources such as CPU, memory, and disk space are allocated to applications based on the currently prevailing workload demand rather than statically provisioned for the peak demand. By dynamically provisioning VMs and turning servers on or off as appropriate, data center operators can maintain the desired quality of service (QoS) while achieving higher server utilization and lower power consumption.

Various techniques have been proposed to automate the management of computing systems. In terms of control architectures, a centralized controller, though offering the best performance, can only manage a stand-alone server or a small-scale system comprising a few servers. Significant challenges remain in achieving real-time control of a large-scale computing system with multiple interacting components. Hierarchical control and decentralized decision making for computing systems are therefore a recent development and an area of active research.

This thesis focuses on designing hierarchical and decentralized control architectures to manage the power and performance of large-scale computing systems. First, we propose a hierarchical control architecture to manage a virtualized server cluster hosting VMs that support online services. In this hierarchy, fully distributed local controllers optimize the CPU shares of the VMs under their control so that the aggregate CPU share provided to the cluster covers the incoming workload, while a supervisory controller on top dynamically shuts down extra machines during periods of light workload to reduce the cluster's power consumption. Two strategies, receding horizon control and neural network based control, are compared for the local controllers. We validate the framework on a cluster supporting three online services and show that our scheme adapts quickly to dynamic workload changes, scales well, and is flexible in that servers can be added or removed at any time while maintaining overall system performance. When managed by our control scheme, the cluster also saves, on average, 20% in power-consumption costs over a three-hour period compared to a system operating without dynamic control.

Second, we propose a fully decentralized control architecture to further improve the scalability of the hierarchical design. Here, each controller manages one server: its inner loop continually optimizes the per-VM computing resources to guarantee the service level agreement (SLA), while its outer loop switches the server or processor package on or off so that the dynamic workload is consolidated onto the smallest number of active servers, reducing power usage. In addition, we organize the controllers in different ways and analyze how these organizations affect the overall performance of large clusters with up to a thousand servers.
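As a rough illustration of the per-server controller described above, the following Python sketch pairs an inner loop that nudges per-VM CPU shares toward an assumed latency target with an outer loop that marks the server for shutdown when its aggregate demand falls below an assumed consolidation threshold. The class name, proportional gain, and thresholds are hypothetical placeholders, not values or algorithms taken from the thesis.

```python
# Illustrative sketch only: a toy per-server controller with an inner loop that
# reallocates per-VM CPU shares toward an SLA target and an outer loop that
# decides whether the server can be switched off. All names, gains, and
# thresholds below are assumed for illustration.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ServerController:
    capacity: float = 1.0                      # total CPU share available on this server
    sla_target: float = 0.2                    # desired per-request latency (s), assumed
    shares: Dict[str, float] = field(default_factory=dict)
    active: bool = True

    def inner_loop(self, observed_latency: Dict[str, float]) -> None:
        """Adjust each VM's CPU share up or down in proportion to its SLA error."""
        for vm, latency in observed_latency.items():
            share = self.shares.get(vm, 0.1)
            error = latency - self.sla_target
            # Simple proportional update (assumed gain of 0.5 per unit error).
            self.shares[vm] = min(self.capacity, max(0.05, share + 0.5 * error * share))
        # Rescale if the requested shares exceed the server's capacity.
        total = sum(self.shares.values())
        if total > self.capacity:
            for vm in self.shares:
                self.shares[vm] *= self.capacity / total

    def outer_loop(self) -> bool:
        """Mark the server for shutdown when aggregate demand is low enough to migrate away."""
        demand = sum(self.shares.values())
        # Assumed consolidation threshold: below 30% utilization the workload is
        # considered small enough to hand off to a neighboring server.
        self.active = demand >= 0.3 * self.capacity
        return self.active


if __name__ == "__main__":
    ctrl = ServerController(shares={"vm1": 0.2, "vm2": 0.2})
    ctrl.inner_loop({"vm1": 0.35, "vm2": 0.10})   # vm1 misses the SLA, vm2 is over-provisioned
    print(ctrl.shares, "active:", ctrl.outer_loop())
```

Keeping the two loops separate mirrors the split described in the abstract: resource allocation reacts at a fast timescale, while on/off switching is a slower consolidation decision.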
Our studies indicate that when the control structure is organized as a causal system, in which a precedence relation exists among the individual controllers, it achieves a high degree of SLA satisfaction (above 98%) while significantly reducing the corresponding switching cost.

Finally, we extend our focus to managing the power consumption of multiple geographically distributed data centers. Assuming each data center is controlled by a well-designed power management scheme such as those described above, we develop a high-level optimization framework through which data centers can earn financial rewards for curtailing power consumption when requested by electric utilities. Specifically, we integrate the demand response (DR) program offered by the electricity market into data center operations and achieve power reduction at some centers by migrating live VMs to other centers. The optimizer aims to maximize the expected profit by trading off the reward against VM migration cost, time, and distance, as well as the risks arising from bandwidth and reward uncertainties. A set of case studies involving data centers participating in an economic DR program is used to validate the framework.
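As a rough illustration of the kind of trade-off such a DR optimizer resolves, the following Python sketch scores candidate VM migration counts by an expected profit that weighs curtailment reward against a linear migration cost, with the reward capped at the utility's requested curtailment. The reward rate, per-VM power draw, cost model, and success probability are all assumed placeholder values, not parameters or the optimization model from the thesis.

```python
# Illustrative sketch only: a toy reward-versus-migration-cost trade-off for a
# single data center participating in a demand response (DR) event. All
# numbers below are assumed for illustration.

def expected_profit(vms_migrated: int,
                    requested_kw: float = 10.0,        # curtailment requested by the utility
                    power_per_vm_kw: float = 0.3,      # assumed power freed per migrated VM
                    reward_per_kw: float = 4.0,        # assumed DR reward rate
                    migration_cost_per_vm: float = 0.8,# assumed bandwidth/downtime cost per VM
                    success_prob: float = 0.95) -> float:
    """Expected profit of migrating a given number of VMs away during a DR event."""
    # No extra reward beyond the requested curtailment.
    curtailed_kw = min(vms_migrated * power_per_vm_kw, requested_kw)
    reward = success_prob * reward_per_kw * curtailed_kw  # reward only if migration completes in time
    cost = migration_cost_per_vm * vms_migrated           # assumed linear migration cost
    return reward - cost


if __name__ == "__main__":
    # Pick the migration count (0..50 VMs) with the highest expected profit.
    best = max(range(51), key=expected_profit)
    print(best, round(expected_profit(best), 2))
```

The cap on rewarded curtailment is what produces an interior optimum: migrating more VMs than the request covers only adds cost, which is the basic tension the thesis's optimizer balances alongside migration time, distance, and uncertainty.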
Keywords/Search Tags: Data center, Power, Control architectures, Decentralized control, Computing systems, Online services, Server, Manage