
Research Of Temperature Modeling And Workload Scheduling In Cloud Datacenter Towards Energy Saving

Posted on: 2018-03-07    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X Li    Full Text: PDF
GTID: 1368330548977406    Subject: Computer application technology
Abstract/Summary:
In recent years, with the development of the Internet and especially the uptake of Cloud computing, many enterprises have begun to invest in the construction of datacenters. As the infrastructure underlying enterprise business, datacenters consume enormous amounts of electricity while providing services, leading to high operational costs and becoming a major bottleneck. Energy-saving technology has therefore become one of the key problems to be solved in Cloud datacenters.

Datacenter energy consists of two parts: computing energy (consumed by computing and networking equipment) and cooling energy (consumed by the cooling infrastructure), each accounting for about half of the total. Most traditional energy-aware workload scheduling policies address only one of the two parts and therefore have certain limitations: (1) policies targeting computing energy typically consolidate workloads onto fewer servers, which increases the likelihood of high-temperature hot spots that require additional cooling energy to remove; (2) if workload balancing is used to reduce hot spots and cooling energy, more servers remain active, yielding higher computing energy; (3) traditional thermal-aware policies do not consider virtualization and cannot be applied to Cloud scenarios; (4) as datacenter scale increases, server failures become the norm rather than the exception, interrupting task execution, and re-executing tasks costs additional energy while degrading system reliability and user experience. This dissertation therefore focuses on the high energy consumption problem and the limitations of traditional scheduling policies, conducting research from the perspectives of survey, modeling, simulation, evaluation, and workload scheduling. The contributions of this work are as follows.

(1) We built a complete datacenter/server temperature model consisting of two components. The first is a datacenter temperature model that analyzes the thermal relationship between the air conditioner and the rack inlet air. The second is a server temperature model that captures the thermal relationship between the rack inlet and the server CPU. For the former, traditional Computational Fluid Dynamics (CFD) techniques are extremely time-consuming and cannot be applied to online scheduling; our method simplifies the CFD model using thermal principles and achieves fast, accurate prediction of the datacenter temperature distribution. For the latter, we propose a CPU temperature model for virtualized environments: we first collect temperature data from the virtualized system offline, then build the model with machine learning techniques by analyzing the collected data. Experiments show that the mean squared error of steady-state CPU temperature prediction is within 1.10, and that of dynamic CPU temperature prediction stays within 1.60 in most scenarios.
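To make the two-level modeling idea concrete, the minimal Java sketch below approximates rack inlet temperatures as the supply temperature plus recirculated server heat, and then estimates steady-state CPU temperature as a linear function of inlet temperature and CPU utilization. This is an illustrative assumption, not the dissertation's actual model; the class name, the recirculation matrix, and the coefficients a0, a1, a2 (standing in for parameters fitted offline) are all hypothetical.

```java
// Illustrative sketch only: a simplified two-level thermal model in the spirit of the
// abstract's approach. All names and coefficients here are hypothetical placeholders.
public class ThermalModelSketch {

    // Datacenter level: rack inlet temperature approximated as the CRAC supply temperature
    // plus heat recirculated from the servers (a common simplification of full CFD).
    // heatMatrix[i][j] is the assumed fraction of server j's power that reaches inlet i.
    public static double[] predictInletTemps(double supplyTemp,
                                             double[][] heatMatrix,
                                             double[] serverPower) {
        double[] inlet = new double[heatMatrix.length];
        for (int i = 0; i < heatMatrix.length; i++) {
            double recirculated = 0.0;
            for (int j = 0; j < serverPower.length; j++) {
                recirculated += heatMatrix[i][j] * serverPower[j];
            }
            inlet[i] = supplyTemp + recirculated;
        }
        return inlet;
    }

    // Server level: steady-state CPU temperature as a linear function of inlet temperature
    // and CPU utilization; a0, a1, a2 stand in for coefficients learned offline.
    public static double predictCpuTemp(double inletTemp, double cpuUtilization,
                                        double a0, double a1, double a2) {
        return a0 + a1 * inletTemp + a2 * cpuUtilization;
    }

    public static void main(String[] args) {
        double[][] d = { {0.02, 0.01}, {0.01, 0.02} };  // toy recirculation coefficients
        double[] power = { 180.0, 120.0 };              // example server power draws in watts
        double[] inlet = predictInletTemps(18.0, d, power);
        System.out.printf("Inlet[0] = %.2f, CPU[0] = %.2f%n",
                inlet[0], predictCpuTemp(inlet[0], 0.7, 12.0, 1.0, 25.0));
    }
}
```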
(2) We developed DartCSim+, a visualized Cloud simulation system that integrates thermal and energy models. DartCSim+ is built on CloudSim, a discrete-event Cloud simulation toolkit originally developed by the Gridbus Lab at the University of Melbourne, Australia, and one of the most widely used Cloud simulators. CloudSim nevertheless has several limitations: (1) it does not support temperature/cooling simulation; (2) it does not support visualized operation; (3) its energy model and network model are not compatible with each other. To fill these gaps, we first enrich model completeness by making the incompatible components within CloudSim work together and by integrating our proposed temperature models. Second, we adjust the event-driven simulation engine, modifying its update policies to support the newly added components and models. Finally, we extend CloudSim with interfaces that allow users to conduct simulations through simple visualized operations.

(3) We proposed new workload scheduling policies aimed at saving total energy. We first analyze the trade-off between computing energy and cooling energy at a macro scale and propose a genetic-algorithm-based scheduling policy, which performs well on small-scale problems but encounters difficulties at large scale. We therefore further propose a greedy scheduling policy that selects the server with the minimum total power increase and combines this with live virtual machine migration to reduce the total energy draw. Comparative experiments against state-of-the-art scheduling policies show that our method saves 4.3-23.2% energy on average and reduces the probability of hot spots by 92.3%. Furthermore, taking the impact of server failures into account, we propose a scheduling policy that is both total-energy-aware and failure-aware, which further reduces energy and improves system reliability and user experience. Experimental results show that it saves 31% energy and increases the task completion rate by 3.6% on average.
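As an illustration of the greedy idea described above (not the dissertation's actual implementation), the sketch below assigns each virtual machine to the feasible host with the smallest estimated increase in total power. The Host and Vm types, the linear computing-power model, and the fixed cooling-overhead factor are all simplifying assumptions introduced for this example.

```java
// Illustrative sketch only: greedy VM placement by minimum estimated increase in total
// (computing + cooling) power. Types and parameters are stand-ins, not DartCSim+/CloudSim classes.
import java.util.ArrayList;
import java.util.List;

public class GreedyPlacementSketch {

    static class Host {
        double utilization;          // current CPU utilization in [0, 1]
        double idlePower = 100.0;    // assumed idle power in watts
        double maxPower  = 250.0;    // assumed full-load power in watts

        double computingPower(double util) {   // simple linear power model
            return idlePower + (maxPower - idlePower) * Math.min(1.0, util);
        }
    }

    static class Vm {
        double demand;               // fraction of one host's CPU the VM needs
        Vm(double demand) { this.demand = demand; }
    }

    // Assumed cooling overhead: cooling power taken as a fixed fraction of computing power,
    // a crude stand-in for a coefficient-of-performance based cooling model.
    static double totalPower(Host h, double util) {
        return h.computingPower(util) * 1.5;
    }

    // Greedy placement: each VM goes to the feasible host with the minimum power increase.
    static void place(List<Vm> vms, List<Host> hosts) {
        for (Vm vm : vms) {
            Host best = null;
            double bestDelta = Double.MAX_VALUE;
            for (Host h : hosts) {
                double newUtil = h.utilization + vm.demand;
                if (newUtil > 1.0) continue;   // skip hosts that would be overloaded
                double delta = totalPower(h, newUtil) - totalPower(h, h.utilization);
                if (delta < bestDelta) {
                    bestDelta = delta;
                    best = h;
                }
            }
            if (best != null) best.utilization += vm.demand;
        }
    }

    public static void main(String[] args) {
        List<Host> hosts = new ArrayList<>();
        hosts.add(new Host());
        hosts.add(new Host());
        place(List.of(new Vm(0.4), new Vm(0.3), new Vm(0.5)), hosts);
        for (Host h : hosts) System.out.printf("host utilization = %.2f%n", h.utilization);
    }
}
```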
Keywords/Search Tags:Cloud datacenter, Temperature modeling, Energy saving scheduling, Failure aware, Simulation platform