
Dynamic Power Management Based On Stochastic Policies

Posted on: 2011-04-23
Degree: Master
Type: Thesis
Country: China
Candidate: Q Zheng
Full Text: PDF
GTID: 2189360332458160
Subject: Control Science and Engineering
Abstract/Summary:
As technology develops, energy saving has become increasingly important, since much of the resource our civilization consumes is non-renewable. Dynamic power management (DPM) is the technology that addresses this problem: it manages power dynamically by keeping the system in lower power states rather than the active state whenever normal performance is unaffected. Through analysis of the basic power management system model, the system can be switched among different power states. In a lower power state the system works less actively than in the active state, so its performance is reduced; power saving comes at the cost of performance.

This thesis takes the computer hard disk as its object of study. First, the system is modeled as three components: the service requester, the service provider, and the service queue. The service requester is modeled as a hidden Markov process, while the service provider and the service queue are each modeled as discrete-time Markov processes; the composition of the three components is again a hidden Markov process. Second, the system is optimized by treating the whole process as a partially observable Markov decision process (POMDP), and a finite state controller (FSC) is introduced to support decision making. Third, the optimal policy is computed on the basis of the stochastic model. The policy optimization algorithms comprise the quadratically constrained linear program (QCLP) algorithm and the bounded policy iteration (BPI) algorithm; the discounted reward and the average reward of the QCLP algorithm are verified using the NEOS server, which is available online. Finally, the results are analyzed. Comparing the QCLP and BPI algorithms shows that QCLP is more useful for problems with large numbers of constraints, while BPI is more robust when the initial conditions are unknown. The stochastic policy is also compared with the N-policy on the hard disk power management model, and the results show that the stochastic policy outperforms the deterministic policy.
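To make the POMDP/FSC setup described above concrete, the following is a minimal sketch, not taken from the thesis: it evaluates a stochastic finite state controller on a hypothetical two-state disk model (sleep/active). All transition, observation, and reward numbers are illustrative assumptions; the evaluation solves the discounted linear system over the cross-product Markov chain of controller nodes and environment states.

```python
import numpy as np

# Hypothetical two-state disk model: state 0 = sleep, state 1 = active.
# All numbers below are illustrative assumptions, not values from the thesis.

gamma = 0.95           # discount factor
S, A, O, N = 2, 2, 2, 2  # env states, actions, observations, FSC nodes

# T[a, s, s']: state transition probabilities under action a
T = np.array([
    [[0.9, 0.1],    # action 0 = "stay/sleep"
     [0.3, 0.7]],
    [[0.2, 0.8],    # action 1 = "wake/serve"
     [0.05, 0.95]],
])

# Z[a, s', o]: observation probabilities (requests visible in the queue)
Z = np.array([
    [[0.8, 0.2], [0.3, 0.7]],
    [[0.7, 0.3], [0.1, 0.9]],
])

# R[s, a]: reward trading power cost against served requests
R = np.array([
    [0.0, -0.5],    # sleeping: waking up costs energy
    [-1.0, 1.0],    # active: idling wastes power, serving earns reward
])

# Stochastic FSC: psi[n, a] = P(a | node n), eta[n, a, o, n'] = P(n' | n, a, o)
psi = np.array([[0.9, 0.1], [0.2, 0.8]])
eta = np.zeros((N, A, O, N))
eta[:, :, 0, 0] = 1.0   # observation "no request" -> go to node 0
eta[:, :, 1, 1] = 1.0   # observation "request"    -> go to node 1

# Build the cross-product chain over (node, state) pairs and evaluate the
# controller's discounted value by solving (I - gamma * P) v = r.
NS = N * S
P = np.zeros((NS, NS))
r = np.zeros(NS)
for n in range(N):
    for s in range(S):
        i = n * S + s
        for a in range(A):
            r[i] += psi[n, a] * R[s, a]
            for s2 in range(S):
                for o in range(O):
                    for n2 in range(N):
                        j = n2 * S + s2
                        P[i, j] += psi[n, a] * T[a, s, s2] * Z[a, s2, o] * eta[n, a, o, n2]

v = np.linalg.solve(np.eye(NS) - gamma * P, r)
print("Discounted value per (node, state):\n", v.reshape(N, S))
```

Policy improvement methods such as BPI or the QCLP formulation would then adjust psi and eta to increase this value; the sketch only shows the evaluation step that both approaches build on.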
Keywords/Search Tags: dynamic power management (DPM), hidden Markov model (HMM), partially observable Markov decision process (POMDP), stochastic policy, finite state controller (FSC)