
Efficient Edge Caching Strategy Based On Machine Learning

Posted on: 2022-07-28    Degree: Master    Type: Thesis
Country: China    Candidate: R Wu    Full Text: PDF
GTID: 2518306572481694    Subject: Information and Communication Engineering
Abstract/Summary:
With the development of multimedia applications and the rapid growth in the number of mobile devices, mobile data traffic is growing at an unprecedented rate. Meanwhile, research shows that most of this increased traffic is contributed by different users repeatedly requesting a small portion of the same content. This means that edge caching can be used to store popular content at the network edge and reduce repeated transmissions, thereby alleviating network congestion, reducing the average delay for users to obtain the content they need, and improving the quality of user experience. In this context, this paper studies efficient edge caching strategies based on machine learning.

Traditional caching studies usually assume a relatively static network environment and, further, that content popularity is fixed. In real networks, however, content popularity changes dynamically in both time and space. To adapt to dynamic content popularity, this paper proposes an efficient edge caching algorithm based on deep reinforcement learning, combining edge caching with the deep Q-network, which allows the system to predict the evolution of content popularity more accurately and improve the long-term cache hit ratio. Simulation results show that the proposed algorithm outperforms three benchmark algorithms commonly used by researchers.

Because a conventional caching strategy based on deep reinforcement learning needs to gather all users' data on a central server for model training, it may leak users' private data and incur extra communication overhead for data transmission. This paper therefore further proposes a heuristic edge caching algorithm based on federated learning. Under the federated learning framework, in each training round the small base stations train their own models on local data and upload only the model updates, not the users' request data, to a central server, which effectively improves the privacy and security of user data. After receiving the uploaded updates, the central server aggregates them by federated averaging and updates the global model. The paper uses a stacked autoencoder to learn latent feature representations of contents and users, measures the latent association between them with cosine distance, and on this basis predicts content popularity accurately. Simulation results show that the cache hit ratio and average download time of the proposed scheme are very close to the ideal performance.

In addition, there is a performance trade-off between cache hit ratio and average download time, which means that for a practical caching system the specific application requirements must be considered when optimizing system performance.
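As an illustration of the first contribution, the following is a minimal sketch of how a DQN-style agent could select cache placement actions from recent request statistics. The state encoding (normalized request counts plus cache flags), the network sizes, the epsilon-greedy policy, and the idea of rewarding cache hits are assumptions made for illustration, not the thesis's exact design.

# Minimal DQN-style cache placement sketch (illustrative assumptions only).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_CONTENTS = 50      # size of the content library (assumed)
CACHE_SIZE = 5       # cache capacity of a small base station (assumed)

class QNet(nn.Module):
    """Maps a state (recent request counts + cache indicator) to one
    Q-value per content that could be placed in the cache."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_CONTENTS, 128), nn.ReLU(),
            nn.Linear(128, N_CONTENTS))
    def forward(self, x):
        return self.net(x)

q_net = QNet()
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # experience replay buffer

def make_state(request_counts, cached_ids):
    """State = normalized popularity estimate concatenated with cache flags."""
    counts = torch.tensor(request_counts, dtype=torch.float32)
    counts = counts / (counts.sum() + 1e-8)
    flags = torch.zeros(N_CONTENTS)
    flags[list(cached_ids)] = 1.0
    return torch.cat([counts, flags])

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy choice of which content to admit into the cache."""
    if random.random() < epsilon:
        return random.randrange(N_CONTENTS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size=32, gamma=0.9):
    """One gradient step on the temporal-difference loss over a replay batch;
    rewards would count cache hits observed after each placement decision."""
    if len(replay) < batch_size:
        return
    states, actions, rewards, next_states = zip(*random.sample(replay, batch_size))
    states, next_states = torch.stack(states), torch.stack(next_states)
    actions = torch.tensor(actions)
    rewards = torch.tensor(rewards, dtype=torch.float32)
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * q_net(next_states).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()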
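For the second contribution, the sketch below illustrates the two building blocks named in the abstract: federated averaging of locally trained model parameters, and a cosine-similarity score between latent user and content vectors such as those produced by a stacked autoencoder. The sample-count weighting, the latent dimensions, and the scoring rule are hypothetical choices for illustration.

# Federated averaging and cosine-similarity popularity scoring (sketch).
import numpy as np

def federated_average(local_updates, sample_counts):
    """Aggregate per-base-station parameters (dicts of arrays) by weighted
    averaging, with weights given by local sample counts (FedAvg)."""
    total = sum(sample_counts)
    keys = local_updates[0].keys()
    return {k: sum(w[k] * (n / total)
                   for w, n in zip(local_updates, sample_counts))
            for k in keys}

def cosine_similarity(a, b):
    """Cosine similarity between a latent user vector and a latent content
    vector (e.g. encodings from a stacked autoencoder)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def predict_popularity(user_codes, content_codes):
    """Score each content by its average similarity to the local users;
    the highest-scoring contents would be cached first."""
    return np.array([np.mean([cosine_similarity(u, c) for u in user_codes])
                     for c in content_codes])

# Example usage with random latent codes (hypothetical dimensions).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    users = rng.normal(size=(10, 16))      # 10 local users, 16-dim codes
    contents = rng.normal(size=(50, 16))   # 50 contents, 16-dim codes
    top5 = np.argsort(-predict_popularity(users, contents))[:5]
    print("Contents to cache:", top5)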
Keywords/Search Tags: Edge caching, reinforcement learning, federated learning, cache hit ratio, average download time