
Intelligent Spectrum Access For Dynamic Spectrum Environment

Posted on: 2022-11-01
Degree: Master
Type: Thesis
Country: China
Candidate: J J Zheng
Full Text: PDF
GTID: 2518306764965899
Subject: Automation Technology
Abstract/Summary:
With the continuous development of informatization, the traditional static spectrum allocation method can no longer meet the growing demand for spectrum. A more efficient spectrum utilization mechanism is therefore needed, and dynamic spectrum access (DSA) technology has been proposed to provide it. In recent years, reinforcement learning has become an important direction in this field; however, most existing techniques are only applicable to static spectrum environments. This thesis studies reinforcement-learning-based intelligent decision-making policies for the dynamic spectrum environment. The goal is to establish a DSA algorithm model with strong adaptive ability, high spectrum utilization efficiency, and a low spectrum collision rate in a highly dynamic spectrum environment. Experiments are conducted to validate the performance of the proposed algorithms. The main work and contributions are as follows:

As a starting point, a DSA policy for the static spectrum environment is developed. DSA technology operates in two modes: full duplex and half duplex. Although existing research on full-duplex DSA achieves excellent spectrum access performance, it is seldom used in industry because of defects such as high communication cost and self-interference. This thesis proposes the Sparse Spectrum Sense based Dynamic Spectrum Access (Sparse-DSA) scheme. By exploiting the gap time between spectrum sensing slots, the scheme makes full use of the policy learning ability of the reinforcement learning algorithm under half-duplex conditions. Simulation results show that this scheme outperforms the traditional half-duplex DSA method in spectrum access performance.

To improve adaptability in the dynamic spectrum environment, an improved Sparse-DSA scheme based on Q-function transfer is proposed. The Q-function transfer reinforcement learning algorithm is a simple and efficient method that significantly improves learning performance on the target task by reusing the prior policy learned in the source task. Experimental results show that the improved scheme based on Q-function transfer effectively improves the Sparse-DSA scheme's adaptive ability in the dynamic spectrum environment.

Given the limitations of the Q-function transfer algorithm, this thesis further designs a new transfer reinforcement learning algorithm with a knowledge screening mechanism, experience-buffer based transfer reinforcement learning, together with an improved Sparse-DSA scheme based on experience-buffer transfer. Experimental results show that the experience-buffer transfer method outperforms the Q-function transfer method in improving the Sparse-DSA scheme's adaptive ability in the dynamic spectrum environment.

In summary, this thesis proposes the Sparse-DSA scheme and two improved Sparse-DSA schemes. Experiment results show that both improved schemes achieve high adaptive capability, high spectrum utilization efficiency, and a low spectrum collision rate. The improved scheme based on Q-function transfer is simple to implement and can be carried out online in real time, but it places high performance requirements on the prior DSA policy. The improved scheme based on experience-buffer transfer removes this requirement by design and offers stronger adaptability to the dynamic spectrum environment, but its pre-training process must be completed offline.
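The abstract does not give implementation details. As an illustration only, the sketch below shows the two transfer ideas it describes (Q-function transfer and experience-buffer transfer) in a simplified tabular Q-learning setting for channel selection. The environment model, channel count, reward values, and the screening rule are hypothetical assumptions and are not taken from the thesis.

```python
import numpy as np

# Hypothetical toy setting: at each slot the secondary user picks one of
# N_CHANNELS channels; reward is +1 for accessing an idle channel, -1 for a collision.
N_CHANNELS = 4
ALPHA, EPS = 0.1, 0.1
rng = np.random.default_rng(0)

def step(action, busy_prob):
    """Assumed environment: each channel is independently busy with probability busy_prob[c]."""
    busy = rng.random(N_CHANNELS) < busy_prob
    return -1.0 if busy[action] else 1.0

def train_q(q, busy_prob, episodes=2000, buffer=None):
    """Epsilon-greedy Q-learning on a stateless (bandit-like) channel-selection task.
    If `buffer` holds screened prior transitions, replay them first
    (experience-buffer transfer)."""
    if buffer:
        for a, r in buffer:
            q[a] += ALPHA * (r - q[a])
    for _ in range(episodes):
        a = rng.integers(N_CHANNELS) if rng.random() < EPS else int(np.argmax(q))
        r = step(a, busy_prob)
        q[a] += ALPHA * (r - q[a])  # stateless Q update
    return q

# Source task: learn on one channel-occupancy pattern.
q_source = train_q(np.zeros(N_CHANNELS), busy_prob=np.array([0.9, 0.2, 0.8, 0.7]))

# (1) Q-function transfer: initialize the target task from the source Q-values.
q_target = train_q(q_source.copy(),
                   busy_prob=np.array([0.2, 0.9, 0.8, 0.7]),
                   episodes=500)

# (2) Experience-buffer transfer: replay only screened source experiences
#     (here: actions whose learned value was positive), then train on the target task.
screened = [(a, 1.0) for a in range(N_CHANNELS) if q_source[a] > 0]
q_target2 = train_q(np.zeros(N_CHANNELS),
                    busy_prob=np.array([0.2, 0.9, 0.8, 0.7]),
                    episodes=500, buffer=screened)

print("Q-function transfer:", q_target)
print("experience-buffer transfer:", q_target2)
```

Since the keywords mention DQN, the thesis presumably applies these transfer mechanisms to network weights and a replay buffer rather than a Q-table; the tabular version above is only meant to make the two mechanisms concrete.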
Keywords/Search Tags:Dynamic spectrum environment, half-duplex, dynamic spectrum access, DSA, DQN, transfer learning