
Energy-Efficient Power Allocation Algorithm In Cognitive Radio Networks

Posted on: 2022-12-30
Degree: Master
Type: Thesis
Country: China
Candidate: J F Lu
Full Text: PDF
GTID: 2518306764472324
Subject: Automation Technology
Abstract/Summary:
With the rapid development of radio networks in recent years, the number of users has grown rapidly, and the shrinking pool of available spectrum resources poses great challenges. Under the traditional fixed spectrum allocation scheme, only licensed users may use their assigned spectrum; when they do not, that spectrum cannot be fully utilized and is referred to as idle spectrum. Cognitive Radio (CR) technology effectively alleviates the pressure on spectrum demand: through a dynamic spectrum access mechanism, it provides idle spectrum to unlicensed users that need it, improving spectrum utilization. Nevertheless, the energy consumption of radio networks is still growing rapidly, so research on green communication technology, in line with the sustainable development strategy, is very important. Most related work, however, focuses on improving the spectrum access capability of unlicensed users, user quality of service, and network throughput, and rarely addresses network energy efficiency. This thesis therefore aims to improve the energy efficiency of cognitive radio networks: under the Underlay spectrum access mechanism, it studies the power allocation problem of CRs using reinforcement learning. The main achievements and contributions are as follows.

1. To address the difficulty of applying traditional energy-efficiency optimization methods in dynamic radio network environments, an energy-efficiency-optimized power allocation algorithm based on independent CR learning is designed using reinforcement learning. Taking network energy efficiency as the objective, and without interfering with the normal communication of licensed users or degrading CR quality of service, the optimal power allocation scheme is obtained through repeated interaction between each CR and the radio network environment, which improves adaptability to dynamic radio network environments. Each CR interacts with the environment independently, without cooperating with other CRs, and learns iteratively under both the Q-Learning (off-policy) and SARSA (on-policy) frameworks. Monte Carlo simulations under the extended radio network model show that, compared with a game-theoretic method, the proposed algorithm improves both network energy efficiency and network throughput. Moreover, comparing against the same algorithm with network throughput as the optimization goal, the energy-efficiency-oriented scheme reduces network throughput to a certain extent without affecting CR quality of service, but significantly improves network energy efficiency.

2. To address the excessive storage and access overhead of a single Q-table in traditional CR collaboration schemes, an energy-efficiency-optimized power allocation algorithm for random CR team collaboration is designed on the basis of reinforcement learning. In this algorithm, a CR forms a team only with the other CRs at its own CR base station. During interactive learning with the environment, each CR adopts the optimal strategy of other CRs with a certain collaboration probability, thereby avoiding the series of problems caused by sharing a single Q-table. Meanwhile, the random team collaboration scheme finds the optimal power allocation strategy more easily than the independent learning scheme. Again, CRs learn iteratively under the Q-Learning and SARSA frameworks respectively, and Monte Carlo simulations are carried out under the extended radio network model. The results show that, compared with the independent-learning energy-efficiency-optimized power allocation algorithm, the proposed algorithm performs better in both network energy efficiency and network throughput, especially the strategy obtained under the SARSA framework.

3. Since different parameters in the proposed algorithms may affect performance differently, four key factors are considered: the number of CRs, the CR signal-to-interference-plus-noise ratio (SINR) threshold, the reward discount factor γ, and the collaboration probability λ. For both the independent-learning and the random-team-collaboration energy-efficiency-optimized power allocation algorithms, the influence of these four parameters is evaluated by simulation under the Q-Learning and SARSA frameworks. The results show that network energy efficiency decreases as the number of CRs grows and as the SINR threshold rises; the reward discount factor γ must be chosen carefully to prevent the algorithm from falling into a local optimum; and the higher the collaboration probability λ, the faster the algorithm converges, and vice versa. Among all configurations, the random-team-collaboration algorithm under the SARSA framework performs best in all experiments.
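The Q-Learning (off-policy) and SARSA (on-policy) frameworks mentioned above differ only in the bootstrap term of the value update. The following is a minimal illustrative sketch, not the thesis implementation: it assumes hypothetical channel-gain, noise, and SINR-threshold values, a single CR, a single environment state, and an energy-efficiency reward (throughput divided by transmit power, with a penalty when the SINR constraint is violated).

```python
import numpy as np

rng = np.random.default_rng(0)

# All numeric values below are illustrative assumptions, not from the thesis.
POWER_LEVELS = np.array([0.1, 0.5, 1.0, 2.0])  # candidate transmit powers (W)
GAIN = 0.8        # assumed channel gain
NOISE = 0.05      # assumed noise-plus-interference power
SINR_MIN = 2.0    # assumed SINR threshold for CR quality of service
GAMMA = 0.9       # reward discount factor (the γ discussed in the abstract)
ALPHA = 0.1       # learning rate
EPS = 0.1         # epsilon-greedy exploration rate

def reward(a):
    """Energy-efficiency reward: spectral efficiency per watt, or a penalty."""
    p = POWER_LEVELS[a]
    sinr = GAIN * p / NOISE
    if sinr < SINR_MIN:
        return -1.0                      # QoS constraint violated
    return np.log2(1.0 + sinr) / p       # throughput / power

def eps_greedy(q, s):
    if rng.random() < EPS:
        return int(rng.integers(len(POWER_LEVELS)))
    return int(np.argmax(q[s]))

def train(on_policy, episodes=2000):
    # A single state s=0 keeps the sketch minimal; the thesis environment is richer.
    q = np.zeros((1, len(POWER_LEVELS)))
    s = 0
    a = eps_greedy(q, s)
    for _ in range(episodes):
        r = reward(a)
        a_next = eps_greedy(q, s)
        if on_policy:
            # SARSA: bootstrap on the action actually taken next
            target = r + GAMMA * q[s, a_next]
        else:
            # Q-Learning: bootstrap on the greedy action
            target = r + GAMMA * np.max(q[s])
        q[s, a] += ALPHA * (target - q[s, a])
        a = a_next
    return q

q_sarsa = train(on_policy=True)
q_qlearn = train(on_policy=False)
best = float(POWER_LEVELS[int(np.argmax(q_qlearn[0]))])
print("greedy power level:", best)
```

Under these assumed numbers, the lowest power level violates the SINR threshold and the highest wastes energy, so both learners settle on an intermediate power that maximizes efficiency; swapping `on_policy` toggles between the two frameworks compared in the thesis.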
Keywords/Search Tags: Cognitive Radio, Energy Efficiency, Reinforcement Learning, Power Allocation