
Research On Data-Driven And Optimization-Driven D2D Power Control Algorithms

Posted on: 2019-01-30
Degree: Master
Type: Thesis
Country: China
Candidate: Z Q Fan
Full Text: PDF
GTID: 2348330545955739
Subject: Electronics and Communications Engineering
Abstract/Summary:
As a key technology of 5G, D2D communication has attracted widespread attention since it was proposed. It can not only relieve the scarcity of spectrum resources in wireless communication systems, but also improve cellular network performance in terms of system throughput, spectrum efficiency and energy efficiency. However, every new technology has two sides: while D2D improves the performance of the communication system, it also introduces new interference into it. Studying power control techniques that reduce inter-user interference is therefore particularly important.

This thesis focuses on the power control problem for cellular users and D2D users in hybrid networks. First, the interference introduced by D2D technology and the traditional optimization-driven power control algorithms, namely open-loop and closed-loop power control, are investigated and analyzed. Then machine learning, the core of artificial intelligence, is studied. Taking the Q-learning reinforcement learning algorithm as the starting point, with the goal of maximizing throughput or energy efficiency, an optimization-driven scheme called the distributed Q-learning power control algorithm is designed. However, this algorithm converges slowly, because the convergence speed of Q-learning depends on the sizes of the state set and the action set. A distributed Q-learning power control algorithm based on a dynamic action set is therefore proposed: it accelerates convergence by dynamically adjusting the range and size of the action set, that is, the power granularity and the power range. From another perspective, this thesis introduces Docitive learning, which enables collaborative learning among multiple D2D agents and weights the shared learning outcomes, and designs a power control algorithm based on Docitive Q-learning to further speed up convergence.

In addition, this thesis introduces supervised learning to solve the D2D power control problem in a data-driven way. A supervised model can be learned offline in advance from prior data and then applied online, which saves time while exploiting historical data. Q-learning is used to generate samples and extract features, and the decision tree algorithm is first combined with power control. To improve model accuracy, a fusion of the gradient boosting decision tree (GBDT) algorithm and logistic regression is then combined with power control.

Finally, the proposed algorithms are compared in terms of throughput, energy efficiency and time complexity. Simulation results show that they outperform the traditional optimization-driven open-loop and closed-loop power control algorithms.
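The distributed Q-learning power control idea summarized above can be sketched as a toy example. This is not the thesis code: it uses a single stateless (bandit-style) agent, a stand-in utility (spectral efficiency minus a power penalty) in place of the thesis's throughput/energy-efficiency objective, and an invented rule for narrowing the action set partway through training; all constants are assumptions for illustration.

```python
import math
import random

random.seed(0)

# Candidate transmit power levels (mW); both granularity and range are
# illustrative -- the thesis adjusts them dynamically to speed convergence.
actions = [1.0, 5.0, 10.0, 15.0, 20.0]
NOISE, GAIN, INTERF, COST = 1e-3, 0.05, 0.02, 0.05

def reward(p):
    # Stand-in per-link utility: spectral efficiency minus a power penalty
    # (an energy-efficiency-flavoured objective).
    sinr = p * GAIN / (NOISE + INTERF)
    return math.log2(1.0 + sinr) - COST * p

Q = {a: 0.0 for a in actions}
alpha, eps = 0.1, 0.2  # learning rate and exploration probability

for step in range(2000):
    # epsilon-greedy action selection over the current action set
    if random.random() < eps:
        a = random.choice(actions)
    else:
        a = max(Q, key=Q.get)
    r = reward(a)
    # stateless Q-update; the thesis uses a full state/action table
    Q[a] += alpha * (r - Q[a])
    # dynamic action set (illustrative rule): halfway through, narrow the
    # set around the current best level with a different granularity
    if step == 1000:
        best = max(Q, key=Q.get)
        actions = sorted({max(0.5, best - 2.0), best, best + 2.0})
        Q = {a: Q.get(a, 0.0) for a in actions}

best_power = max(Q, key=Q.get)
print(best_power)
```

Shrinking the action set concentrates exploration on a smaller neighbourhood of the current best power, which is exactly the mechanism the thesis uses to cut the convergence time of the plain distributed scheme.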
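The data-driven stage can likewise be sketched. In this toy version, labelled samples (interference feature, best power level) are produced offline by brute force, standing in for the thesis's Q-learning sample generator, and a one-split decision tree (a stump, far simpler than the GBDT-plus-logistic-regression fusion the thesis uses) is fitted; the feature choice, utility function and constants are all assumptions for illustration.

```python
import math

POWERS = [1.0, 10.0, 20.0]  # illustrative discrete power levels (mW)

def utility(p, interf):
    # Same stand-in objective: spectral efficiency minus a power penalty.
    return math.log2(1.0 + p * 0.05 / (1e-3 + interf)) - 0.05 * p

# --- offline: build training data (feature = interference level) ---
samples = []
for i in range(1, 200):
    interf = i * 0.005
    label = max(POWERS, key=lambda p: utility(p, interf))
    samples.append((interf, label))

# --- offline: fit a decision stump (best single-threshold split) ---
def fit_stump(data):
    best = None
    for split, _ in data:
        left = [y for x, y in data if x <= split]
        right = [y for x, y in data if x > split]
        if not left or not right:
            continue
        lmaj = max(set(left), key=left.count)   # majority label, left side
        rmaj = max(set(right), key=right.count)  # majority label, right side
        acc = (left.count(lmaj) + right.count(rmaj)) / len(data)
        if best is None or acc > best[0]:
            best = (acc, split, lmaj, rmaj)
    return best[1:]

thr, low_label, high_label = fit_stump(samples)

# --- online: choosing a power is now a single comparison ---
def predict(interf):
    return low_label if interf <= thr else high_label
```

The point of the offline/online split is visible even in this stump: all the expensive search happens ahead of time on historical data, and the online decision costs one threshold test, which is the time saving the data-driven approach targets.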
Keywords/Search Tags:D2D communication, power control, machine learning, Q-learning, docitive learning, logistic regression, gradient boosting decision tree