
Research On Wireless Caching Technology Based On Deep Reinforcement Learning In Small Cell Network

Posted on: 2021-05-09
Degree: Master
Type: Thesis
Country: China
Candidate: P Y Wu
Full Text: PDF
GTID: 2518306512486934
Subject: Electronics and Communications Engineering
Abstract/Summary:
In recent years, the rapid growth of ubiquitous mobile devices and networked multimedia applications has produced an enormous volume of traffic, increasing the transmission pressure on the network. Mobile edge computing and wireless caching exploit the computing and storage capacity of small-cell base stations (SBSs) to cache popular multimedia content at the wireless edge, thereby reducing redundant data transmission, accelerating content downloads, and enhancing the user experience. This thesis designs three algorithms to improve the cache hit rate and reduce the traffic load in SBS networks. The main contributions are as follows.

(1) In a wireless network with a single SBS and multiple users, traditional cache update methods cannot accurately capture the dynamic characteristics of content popularity and user requests. This thesis therefore proposes a dynamic content update algorithm based on deep reinforcement learning (DRL). The proposed memory-assisted recurrent Q-network algorithm optimizes the cache policy in real time according to the stochastic environment and enhances the cache decision-making ability of the SBS. Simulation results show that the proposed algorithm achieves a higher average reward and cache hit rate than existing update strategies such as least recently used (LRU), first-in first-out (FIFO), and deep Q-network (DQN) based algorithms.

(2) In a wireless network with multiple SBSs, this thesis proposes a double deep recurrent Q-network algorithm for centralized content updating. The algorithm collects the local system state of each SBS to form a global state and coordinates the cache update process of the SBSs by analyzing this global state. Simulation results show that the algorithm achieves a higher cache hit rate than random and conventional DRL-based algorithms.

(3) Because the centralized content update algorithm has high complexity, this thesis proposes a dynamic cooperative content update algorithm based on federated learning (FL). The algorithm combines asynchronous advantage actor-critic (A3C) with FL to train the SBS models more efficiently. Compared with the centralized method, it keeps the training data at each SBS and requires no additional information exchange. Simulation results show that the algorithm performs well in terms of long-term system reward, cache hit rate, and transmission cost.

Finally, the thesis summarizes the work and discusses directions for future research.
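To make the single-SBS setting in contribution (1) concrete, the following minimal Python sketch simulates a Zipf-distributed request trace and measures the cache hit rate of the LRU and FIFO baselines named in the abstract. The catalogue size, cache capacity, and Zipf exponent are illustrative assumptions, not values from the thesis; the per-request hit/miss outcome is the kind of reward signal the proposed recurrent Q-network agent would optimize, but the agent itself is not reproduced here.

```python
import random
from collections import OrderedDict, deque

# Illustrative parameters (assumptions, not taken from the thesis).
NUM_CONTENTS = 500      # size of the content catalogue
CACHE_SIZE = 20         # SBS cache capacity (in contents)
NUM_REQUESTS = 50_000   # length of the simulated request trace
ZIPF_ALPHA = 0.8        # skew of the popularity distribution

def zipf_trace(n_contents, n_requests, alpha, seed=0):
    """Generate a request trace whose popularity follows a Zipf law."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_contents + 1)]
    contents = list(range(n_contents))
    return rng.choices(contents, weights=weights, k=n_requests)

def lru_hit_rate(trace, cache_size):
    """Least-recently-used baseline: evict the content unused for longest."""
    cache, hits = OrderedDict(), 0
    for c in trace:
        if c in cache:
            hits += 1
            cache.move_to_end(c)          # refresh recency on a hit
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False) # evict the least recently used item
            cache[c] = True
    return hits / len(trace)

def fifo_hit_rate(trace, cache_size):
    """First-in-first-out baseline: evict the oldest cached content."""
    queue, cached, hits = deque(), set(), 0
    for c in trace:
        if c in cached:
            hits += 1
        else:
            if len(queue) >= cache_size:
                cached.discard(queue.popleft())
            queue.append(c)
            cached.add(c)
    return hits / len(trace)

if __name__ == "__main__":
    trace = zipf_trace(NUM_CONTENTS, NUM_REQUESTS, ZIPF_ALPHA)
    # The per-request hit (1) / miss (0) outcome is also the natural reward
    # signal that a DRL cache-update agent would maximize over time.
    print(f"LRU  hit rate: {lru_hit_rate(trace, CACHE_SIZE):.3f}")
    print(f"FIFO hit rate: {fifo_hit_rate(trace, CACHE_SIZE):.3f}")
```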
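Contribution (2) builds on the double Q-learning idea of decoupling action selection from action evaluation. The sketch below shows only that target computation on toy Q-tables over a small discretized "global state"; the deep and recurrent parts of the thesis's double deep recurrent Q-network, and the actual SBS state encoding, are omitted and all numbers are assumptions for illustration.

```python
import numpy as np

def double_q_target(q_online, q_target, next_state, reward, gamma=0.9):
    """Double-Q target: the online table picks the next action,
    while the target table evaluates it (decoupled selection/evaluation)."""
    best_action = int(np.argmax(q_online[next_state]))
    return reward + gamma * q_target[next_state, best_action]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 8, 4       # toy discretized global state / update actions
    q_online = rng.random((n_states, n_actions))
    q_target = q_online.copy()

    # One illustrative update for a transition (s, a, r, s').
    s, a, r, s_next = 2, 1, 1.0, 5   # reward 1.0 could stand for a cache hit
    lr = 0.1
    td_target = double_q_target(q_online, q_target, s_next, r)
    q_online[s, a] += lr * (td_target - q_online[s, a])

    # Periodically the target table is refreshed from the online one.
    q_target = q_online.copy()
    print("updated Q(s, a):", round(float(q_online[s, a]), 4))
```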
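For contribution (3), the key property is that each SBS keeps its request data locally and only exchanges model parameters. The sketch below shows a FedAvg-style aggregation step weighted by per-SBS data volume, assuming simple weight vectors; the local A3C training used in the thesis is replaced by a placeholder noisy update, and all names and counts are hypothetical.

```python
import numpy as np

def local_update(weights, local_seed, lr=0.01, steps=10):
    """Placeholder for per-SBS training (the thesis uses A3C locally);
    here it is just a few noisy gradient-like steps on private data."""
    rng = np.random.default_rng(local_seed)
    w = weights.copy()
    for _ in range(steps):
        fake_grad = rng.normal(size=w.shape)   # stands in for a real gradient
        w -= lr * fake_grad
    return w

def federated_average(local_weights, sample_counts):
    """FedAvg-style aggregation: average SBS models weighted by the amount
    of local request data, without exchanging the data itself."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

if __name__ == "__main__":
    n_sbs, dim = 3, 6
    global_w = np.zeros(dim)                   # shared model broadcast by the server
    sample_counts = [1200, 800, 2000]          # illustrative per-SBS request counts

    for round_idx in range(5):                 # a few federated rounds
        locals_ = [local_update(global_w, local_seed=round_idx * 10 + i)
                   for i in range(n_sbs)]
        global_w = federated_average(locals_, sample_counts)

    print("aggregated model after 5 rounds:", np.round(global_w, 3))
```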
Keywords/Search Tags: mobile edge computing, wireless caching, deep reinforcement learning, federated learning, content update, cache hit rate