
Resource Control Optimization For D2D Communication Underlaying Cellular Networks

Posted on: 2018-06-12    Degree: Master    Type: Thesis
Country: China    Candidate: Q Y Hao    Full Text: PDF
GTID: 2348330512975669    Subject: Communication and Information System
Abstract/Summary:
Mobile communication networks have come a long way, from the first-generation analog systems that carried only voice to the large-scale commercial deployment of fourth-generation (4G) systems. Bringing emerging content such as mobile cloud computing and mobile multimedia into the network is the next evolutionary goal of mobile communications. With the rapid spread of intelligent terminals and the explosive growth of network traffic, candidate technologies for fifth-generation (5G) mobile communications have attracted wide attention in industry. As a key 5G candidate, Device-to-Device (D2D) communication has become a research hotspot. Introducing D2D communication into the cellular network substantially changes how network and user resources are allocated, so the optimal design of resource control has become a research focus. Resource control comprises mode selection, resource allocation, and power control, and a growing body of work treats these three functions jointly to achieve the best network performance. Most existing research on D2D communication is based on the infinite-backlog or packet-level traffic model. This thesis instead adopts the flow-level traffic model, which better matches new-generation mobile communications, and optimizes mode selection and resource allocation, both of which belong to resource control.

To minimize the mean energy consumption of flow transmission, the cellular network with underlaying D2D communication is assumed to use Orthogonal Frequency Division Multiple Access (OFDMA). Using queuing theory, this thesis formulates the resource control problem as an infinite-horizon average-reward Markov decision process (MDP). The classical solution of an MDP is Bellman's equation, which corresponds to a traditional centralized offline value iteration algorithm. To tackle the well-known curse of dimensionality faced by the MDP model, this thesis reduces Bellman's equation to an equivalent Bellman equation, establishes the relationship between the Q-factor and the value function in the equivalent equation, and uses a linear approximation to decompose the global Q-factor into per-queue Q-factors. An online stochastic learning algorithm then updates each per-queue Q-factor. Combining the equivalent Bellman equation, the linear approximation, and online stochastic learning, this thesis proposes a distributed resource control algorithm that performs mode selection and resource allocation for D2D communication in the cellular network so as to minimize the mean energy consumption of flows. Simulation comparisons against baseline algorithms show that the proposed algorithm achieves the lowest mean energy consumption.
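For reference, the infinite-horizon average-reward Bellman equation mentioned above typically takes the following standard form; the abstract does not spell out the thesis's exact state and action definitions, so the symbols s (global queue state), a (mode-selection and resource-allocation action), and g (per-stage energy cost) below are generic placeholders:

```latex
\theta + V(s) \;=\; \min_{a \in \mathcal{A}(s)} \Big[\, g(s,a) \;+\; \sum_{s'} \Pr(s' \mid s, a)\, V(s') \,\Big]
```

Here θ is the optimal average cost (the mean energy consumption being minimized) and V is the relative value function; the Q-factor Q(s,a) is the bracketed term, so that V(s) = min_a Q(s,a).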
Keywords/Search Tags:Device-to-Device Communication, Resource Control, Mode Selection, Resource Allocation, Flow-level Traffic Model