
Research On Key Technology Of Energy Saving Of Cloud Data Center Based On Software

Posted on: 2017-08-25
Degree: Doctor
Type: Dissertation
Country: China
Candidate: J C Zhou
Full Text: PDF
GTID: 1318330485965956
Subject: Computer system architecture
Abstract/Summary:
The problem of high energy consumption has long been a key obstacle to the sustainable development of data centers. As new technologies such as cloud computing and big data develop rapidly, data centers keep growing in scale and their energy consumption rises with them; one study estimates that the related energy consumption in China reached about 1,000,000 GWh in 2015. It is therefore time to consider how to solve this problem.

How can the high energy consumption of a data center (DC) be reduced effectively? Different researchers take different approaches. This dissertation focuses on reducing the energy consumption of the IT systems of a DC by means of compute virtualization, network virtualization and storage virtualization. We analyze what these virtualization technologies can do and use them to design a new energy-saving system prototype that can reduce the energy consumption of a DC dramatically. The contributions of this dissertation are as follows:

(1) To reduce the energy consumption of servers, we analyze the relationship between the workload of a cloud data center and the completion times that users expect, and design a new resource assignment and scheduling model that executes tasks with as few resources as possible without affecting the user experience. We call it the User-aware Resource Provision Policy and Scheduling Model. An algorithm named BBTSA analyzes the user behavior data; probability theory is then applied to the resulting statistics to forecast the set of tasks that will be submitted in the next time segment and their expected completion times. After the policy table is created, the system dynamically adjusts its resource provisioning policy according to the table so as to maximize the VOC obtained per unit of resource. To evaluate the effect, we ran four series of experiments on HUTAF, a cloud testing platform developed by Huawei. The results indicate that the proposed resource provisioning policy is effective in reducing the energy consumption of servers.

(2) To reduce the energy consumption of storage arrays, we designed a large number of experiments to analyze the patterns of duplicate data in a cloud data center and, based on the findings, designed a new cluster architecture for a deduplication system. We found that in a virtual desktop infrastructure (VDI) system, two users who share more cross-project work also share more duplicate data. We therefore designed a user-aware deduplication algorithm. It abandons the rule of data locality and works on a rule of user locality instead: for each user group it needs to load only one user's fingerprint data into memory. This reduces memory requirements by 5x-10x compared with other algorithms and, in addition, bounds the search scope of every lookup, avoiding a large number of read I/O operations. The algorithm also adjusts the search scope dynamically according to the current workload of the VDI system, always pursuing the best deduplication ratio without affecting the response time of the VDI system. The prototype experiments show that the algorithm improves deduplication performance, especially when it is used in a massive data storage system; a minimal sketch of the user-locality idea is given below.
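The following Python sketch illustrates the user-locality idea described above. It is not the dissertation's implementation: the class name, the SHA-1 fingerprinting, and the fixed max_scope cap are illustrative assumptions. The sketch only shows how a deduplicator might keep a small, bounded fingerprint index per user group and consult just the active group's index on each write.

```python
import hashlib
from collections import OrderedDict

class UserLocalityDeduplicator:
    """Sketch: deduplicate per user group ("user locality") rather than by
    global data locality; only one group's fingerprints are kept in memory."""

    def __init__(self, max_scope=4096):
        self.max_scope = max_scope   # cap on fingerprints searched per group
        self._indexes = {}           # user group -> bounded fingerprint index

    def _index_for(self, group):
        """Load (or create) the in-memory fingerprint index of one user group."""
        if group not in self._indexes:
            self._indexes[group] = OrderedDict()
        return self._indexes[group]

    def write_chunk(self, group, chunk: bytes) -> bool:
        """Return True if the chunk is new and must be stored, False if duplicate."""
        index = self._index_for(group)
        fp = hashlib.sha1(chunk).hexdigest()
        if fp in index:
            index.move_to_end(fp)            # keep hot fingerprints in scope
            return False                     # duplicate within this user group
        index[fp] = True
        if len(index) > self.max_scope:      # bound the search scope per group
            index.popitem(last=False)        # evict the oldest fingerprint
        return True

# Example: the second identical chunk from the same group is a duplicate.
dedup = UserLocalityDeduplicator(max_scope=1024)
print(dedup.write_chunk("group-a", b"desktop image block"))   # True: new chunk
print(dedup.write_chunk("group-a", b"desktop image block"))   # False: duplicate
```

In this sketch, shrinking or enlarging max_scope at run time would play the role of the workload-driven scope adjustment mentioned above.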
Beyond the single-node algorithm, we also found that current cluster-based deduplication systems suffer from a fundamental limitation: deduplication efficiency, scalability and throughput compete with one another and limit the number of nodes a system can use. As a result, even the fastest systems reach a rate of only about 40 TB/h and cannot meet petabyte-scale requirements. This dissertation presents a high-performance cluster-based deduplication prototype named HPDS that is designed to eliminate this limitation by decoupling the dependence between the Sparse Index Table and the Container Address Table, which simplifies global deduplication, and by replacing data routing with fingerprint routing, which reduces the overhead between nodes.

(3) To reduce the energy consumption of network devices, we design a new network system that knows what kind of business-system data it is transmitting. It is smarter than a traditional network device: by exploiting the programmability of SDN, it does not merely forward data blindly but can also copy or forward data automatically according to information from the business system. We also design a modified HDFS that pushes its metadata to the SDN controller, which in turn updates the flow tables of the OpenFlow switches. In this system a switch knows the backup node of every piece of data it forwards, so it can make a copy and forward it to the backup node automatically. This reduces the volume of traffic that has to be transmitted and saves energy on the network devices. A sketch of how such duplication rules could be derived from HDFS block placement follows.
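A minimal Python sketch of that idea is shown below. It is not the system described in the dissertation: real OpenFlow rules match on packet header fields of the client-to-datanode connection rather than on a block id, and the function name, block ids, node names and port numbers here are hypothetical. The sketch only shows how block placement pushed from HDFS metadata could be turned into switch rules that output a write to both the primary and the backup node.

```python
def build_duplication_rules(block_locations, node_ports):
    """Turn HDFS block placements {block_id: [primary_node, backup_node]} and a
    switch-port map {node: port} into rule dicts that forward a write to the
    primary node and copy it to the backup node in the same rule."""
    rules = []
    for block_id, nodes in block_locations.items():
        primary, backup = nodes[0], nodes[1]
        rules.append({
            # Stand-in match; a real rule would match the write flow's headers.
            "match": {"block_id": block_id, "dst": primary},
            "actions": [
                {"output": node_ports[primary]},   # normal forwarding
                {"output": node_ports[backup]},    # switch-made replica copy
            ],
        })
    return rules

# Hypothetical placement pushed from HDFS metadata to the SDN controller.
placements = {"blk_1001": ["dn1", "dn2"]}
ports = {"dn1": 3, "dn2": 4}
for rule in build_duplication_rules(placements, ports):
    print(rule)
```

Emitting two output actions in a single rule is what would let the switch create the replica itself, so the same data no longer has to be sent across the network twice.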
Keywords/Search Tags: Cloud Data Center, Cloud Computing, Software Defined Network, Deduplication, Software-based Energy Saving