
Energy Adaptive Management Policy For Wireless Sensor Network Node

Posted on: 2013-08-11
Degree: Master
Type: Thesis
Country: China
Candidate: H Z Li
Full Text: PDF
GTID: 2248330377460890
Subject: Computer application technology

Abstract/Summary:
How to use energy effectively is one of the key challenges in wireless sensor networks (WSNs). Turning off components and efficient data transmission are commonly used to improve energy efficiency. In this dissertation, we design an energy adaptive management (AM) policy to improve the energy efficiency of a sensor node, where the channel and buffer state information are assumed to be always available at the receiver and the transmitter, and packets of finite length arrive according to a Poisson distribution. The objective is to minimize the expected cost, which comprises the energy consumption per packet, buffer overflow, and the energy consumption of operational mode (OM) switching. First, a mechanism that dynamically turns different components on or off and adjusts the transmission power of the node is proposed to conserve energy while maintaining the required performance. Second, a fragment transmission approach is proposed to improve the energy efficiency of data transmission. Then, we model the power management problem in a sensor node as a Markov decision process (MDP) and use the Q-learning algorithm to search for an optimized strategy. Finally, simulation is used to illustrate the effectiveness of the proposed methods: the consumed energy is well balanced by the optimized strategy while the throughput of the node does not decrease significantly, so the lifetime of the sensor node can be extended as far as possible without affecting system performance.

In this dissertation the system state is the combination of the channel state and the buffer state, the channel is discretized into fine-grained states, and fragment transmission is used for data delivery, so the space and time complexity of the algorithm increases. To address this, state clustering and optimum fragment transmission are further proposed to reduce the complexity of the algorithm. The simulation results show that, with state clustering and optimum fragment transmission, the optimal policy can be found quickly, and the optimal policy improves energy efficiency without affecting system performance.
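To make the MDP formulation concrete, the following is a minimal sketch (not the thesis implementation) of a tabular Q-learning loop for node power management. The state is the pair (channel quality, buffer occupancy), the actions are operational modes, and the per-step cost combines transmission energy, mode-switch energy, and a buffer-overflow penalty. The toy transition model and all numeric values are illustrative assumptions, not parameters from the thesis.

```python
# Hypothetical Q-learning sketch for sensor-node power management (illustrative only).
import random
from collections import defaultdict

CHANNEL_STATES = ["good", "bad"]           # coarse channel quality levels
BUFFER_CAPACITY = 8                        # packets the node can queue
ACTIONS = ["sleep", "idle", "transmit"]    # operational modes (OM)

TX_ENERGY = {"good": 1.0, "bad": 3.0}      # energy per transmitted packet vs. channel
SWITCH_ENERGY = 0.5                        # cost of changing operational mode
OVERFLOW_PENALTY = 10.0                    # cost of dropping an arriving packet
ARRIVAL_PROB = 0.4                         # Bernoulli stand-in for Poisson arrivals

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1     # learning rate, discount, exploration

def step(state, action, prev_action):
    """Simulate one decision epoch; return (next_state, cost)."""
    channel, buffered = state
    cost = SWITCH_ENERGY if action != prev_action else 0.0
    if action == "transmit" and buffered > 0:
        cost += TX_ENERGY[channel]
        buffered -= 1
    if random.random() < ARRIVAL_PROB:      # new packet arrival
        if buffered < BUFFER_CAPACITY:
            buffered += 1
        else:
            cost += OVERFLOW_PENALTY        # buffer overflow
    channel = random.choice(CHANNEL_STATES) # memoryless channel for simplicity
    return (channel, buffered), cost

Q = defaultdict(float)

def choose(state):
    """Epsilon-greedy action selection; we minimize expected cost."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: Q[(state, a)])

state, prev_action = ("good", 0), "idle"
for _ in range(50_000):
    action = choose(state)
    next_state, cost = step(state, action, prev_action)
    best_next = min(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (cost + GAMMA * best_next - Q[(state, action)])
    state, prev_action = next_state, action

# Inspect the learned operational-mode policy for each (channel, buffer) state.
for ch in CHANNEL_STATES:
    for b in range(BUFFER_CAPACITY + 1):
        s = (ch, b)
        print(s, min(ACTIONS, key=lambda a: Q[(s, a)]))
```

The state-clustering and fragment-size choices described in the abstract would, under the same assumptions, shrink the state set and extend the action set respectively; they are omitted here to keep the sketch small.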
Keywords/Search Tags: sensor node, adaptive management, fragment transmission, MDP, reinforcement learning, state clustering