
Research On Distributed Edge Caching Based On Deep Reinforcement Learning In Fog Radio Access Network

Posted on: 2022-01-18    Degree: Master    Type: Thesis
Country: China    Candidate: J Yan    Full Text: PDF
GTID: 2518306740496174    Subject: Communication and Information System
Abstract/Summary:
The rapid growth of smart devices and mobile application services has placed enormous traffic pressure on wireless networks. The Fog Radio Access Network (F-RAN) is attracting increasing attention from researchers and engineers because it can effectively improve wireless network performance by placing popular files close to users. In an F-RAN, Fog Access Points (F-APs) are edge devices equipped with limited storage and computing resources. Because user requests vary over time and F-AP storage capacity is limited, each F-AP must cache files strategically to achieve high caching efficiency. Most distributed edge caching methods based on traditional optimization algorithms assume that file popularity is known and static, which does not match reality: traditional optimization algorithms are ill-suited to caching optimization problems with dynamic assumptions. To address these issues, this thesis studies distributed edge caching methods based on deep reinforcement learning in F-RAN under the assumption that the file popularity distribution is time-varying and unknown.

First, distributed edge caching is studied for an F-RAN with unknown file popularity. The distributed edge caching problem is modeled as maximizing the long-term net communication profit of each F-AP, i.e., the fee charged by the mobile network operator for user requests minus all transmission costs incurred during communication. A reinforcement learning (RL) algorithm is then used to solve the caching optimization problem, and the relevant RL components are defined in turn from the current F-RAN architecture and the optimization objective. A time-varying personalized user request model is proposed to generate the user request data that forms the external environment of the RL agent. Finally, to mitigate the state-space explosion that often arises in RL and to accelerate convergence, a distributed edge caching algorithm based on the Double Deep Q-Network (DDQN), a deep reinforcement learning (DRL) algorithm, is proposed to find the optimal caching strategy. Simulation results show that this algorithm outperforms traditional caching methods by nearly 50%. Moreover, compared with plain RL, the DRL-based distributed edge caching algorithm converges faster and achieves better caching performance.

Second, since user requests can be influenced by content recommendation, introducing reasonable recommendations can reduce the uncertainty and prediction difficulty of user requests. Building on the above work, the distributed edge caching problem with unknown file popularity and dynamic content recommendation in F-RAN is therefore studied. The F-RAN is first re-modeled with a content recommendation mechanism. The recommendation strategy is then merged into the original caching strategy, so that the combined caching-and-recommendation strategy reduces to a single caching strategy, halving the subsequent training complexity. To match the content recommendation policy, its influence mechanism is added to the original user request model. Finally, with maximizing the long-term net communication profit of each F-AP as the optimization goal, a DDQN-based distributed edge caching algorithm that accounts for time-varying content recommendation is proposed to find the optimal caching strategy. Simulation results show that content recommendation improves the convergence speed and caching performance of the original caching algorithm to some extent.

Finally, building further on the above, distributed edge caching with time-varying content recommendation is studied for an F-RAN in which file sizes are non-uniform and file popularity is unknown. Most existing studies dealing
with uncoded caching methods assume that all files on the cloud server have the same size; while this assumption greatly simplifies the modeling and optimization of caching policies, it does not match the actual state of files on cloud servers. Therefore, the uniform-file-size restriction is removed from the system model, and the time-varying personalized user request model with content recommendation is extended accordingly. A 'pre-split' mechanism with a dynamic upper bound is then proposed for the F-AP file caching process, and a lazy-update mechanism is proposed for training the relevant parameters to resolve some problems arising in the pre-split mechanism. Under these assumptions, the distributed edge caching problem with time-varying content recommendation is re-modeled to maximize the long-term net communication profit of each F-AP. Finally, a DDQN-based distributed edge caching algorithm is proposed to solve this problem with non-uniform file sizes and time-varying content recommendation. Simulation results show that the proposed caching algorithm adapts to non-uniform file sizes on the cloud server and dynamically adjusts the cache capacity occupied by the currently cached files in each F-AP according to user requests.
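The caching decision loop described in the abstract can be sketched in miniature. The toy sketch below is not the thesis's implementation: for brevity it replaces the DDQN with tabular Q-learning, uses invented fee/cost constants, and keeps the skewed request distribution stationary. It only illustrates the state-action-reward structure: the state is the F-AP cache content plus the requested file, the action is which cached file to replace on a miss, and the reward is the per-request net communication profit (operator fee minus transmission cost).

```python
import random

# Illustrative constants (assumed, not from the thesis): operator fee per
# served request and transmission costs for an edge hit vs. a cloud fetch.
FEE, EDGE_COST, CLOUD_COST = 1.0, 0.2, 0.8

def net_profit(cache_hit):
    """Per-request net communication profit: fee minus transmission cost."""
    return FEE - (EDGE_COST if cache_hit else CLOUD_COST)

def run_q_caching(n_files=6, capacity=2, steps=5000, seed=0):
    """Toy stand-in for the thesis's DDQN agent: tabular Q-learning over
    cache states. On a miss the agent picks which cached file to replace
    with the requested one (or keeps the cache unchanged)."""
    rng = random.Random(seed)
    alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, epsilon
    q = {}                                # (state, action) -> value
    cache = tuple(range(capacity))        # initially cache files 0..capacity-1
    # Skewed request distribution standing in for unknown file popularity
    # (stationary here for brevity; the thesis assumes it is time-varying).
    weights = [2.0 ** -i for i in range(n_files)]
    actions = list(range(capacity + 1))   # evict slot i, or capacity = no-op
    total = 0.0
    for _ in range(steps):
        req = rng.choices(range(n_files), weights)[0]
        if req in cache:
            total += net_profit(True)     # edge hit: cheap delivery
            continue
        state = (cache, req)
        a = (rng.choice(actions) if rng.random() < eps
             else max(actions, key=lambda x: q.get((state, x), 0.0)))
        r = net_profit(False)             # miss: costly cloud fetch
        if a < capacity:                  # replace the chosen cache slot
            cache = tuple(sorted(cache[:a] + (req,) + cache[a + 1:]))
        nxt = (cache, req)
        best_next = max(q.get((nxt, x), 0.0) for x in actions)
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + alpha * (r + gamma * best_next - old)
        total += r
    return total / steps                  # average per-request net profit
```

A learned policy pushes the average profit toward the hit-side value by keeping the most popular files cached. In the thesis's setting the tabular `q` dictionary would be replaced by online and target Q-networks, which is what lets DDQN cope with the state-space explosion that tabular RL suffers from.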
Keywords/Search Tags: Fog radio access network, Distributed edge caching, Deep reinforcement learning, Content recommendation, User request model