
Research On Caching Strategy Based On Machine Learning In Mobile Network

Posted on: 2021-01-22
Degree: Master
Type: Thesis
Country: China
Candidate: G J Shan
Full Text: PDF
GTID: 2428330614965739
Subject: Communication and Information System
Abstract/Summary:
Deploying caches in mobile networks can effectively reduce latency, improve transmission rates, relieve backhaul congestion, and offload traffic at base stations, making caching one of the key technologies for coping with the rapid growth of wireless data traffic. However, compared with the enormous Internet content library, cache capacity is very small. How to design an effective cache placement strategy, deciding which files to cache so as to make full use of the limited capacity, is one of the central problems in applying caching technology. At the same time, because file popularity changes constantly, how to design a cache replacement strategy that adapts to these changes is another. This thesis studies caching technology in mobile networks: by predicting file popularity and taking mobility, sociality, and other factors into account, it formulates cache placement and replacement strategies. The main contributions are as follows:

(1) For a heterogeneous network containing a macro base station, small base stations, and D2D communication, a cache placement strategy for small base stations and important users is proposed based on user mobility and sociality. First, when user preferences (the probability distribution of each user's requests over different files) are unknown, a machine learning method is used to predict them from the users' request histories. Taking into account user mobility, physical location, and social relationships, an expression for the average system cost is derived, and an optimization problem minimizing this cost is formulated with the cache placement of the small base stations and important users as variables. To address the high computational complexity when the number of important users is large, the objective function is proved to be supermodular, and a suboptimal algorithm based on the greedy algorithm is proposed, which greatly reduces the complexity. Finally, simulation results show that the proposed placement strategy substantially reduces the system cost, and that the suboptimal solution performs very close to the optimal one.

(2) For a single-base-station scenario with unknown local file popularity, a user preference prediction algorithm based on a recommendation system and a cache capacity determination algorithm based on the trade-off between cache deployment cost and backhaul savings are proposed. First, the score matrix is predicted using collaborative filtering and a latent factor model; combined with user activity, user preference is then predicted using deep learning. Since increasing the cache capacity raises the operator's deployment cost while reducing the backhaul cost, an optimization problem maximizing the operator's revenue is formulated to realize this trade-off, and the optimal cache capacity and the corresponding cache placement strategy are solved for. Simulation results show that, compared with a placement strategy based on global popularity, the strategy based on local popularity improves the cache hit ratio more effectively.

(3) For a single-base-station scenario where the local popularity is dynamic and unknown, a cache replacement strategy based on deep reinforcement learning is proposed to maximize the number of cache hits. The cache replacement problem is first modeled as a Markov decision process: the currently cached contents and the requested file form the system state, the cache replacement decision is the action, and the reward function is designed to encourage more cache hits. A deep-reinforcement-learning-based cache replacement decision model is then constructed, and the replacement strategy is designed with the A3C algorithm. Simulation results show that increasing the number of agents in A3C accelerates convergence, and that the proposed cache replacement strategy achieves a higher cache hit rate than FIFO, LRU, LFU, and other traditional replacement strategies.
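The greedy placement idea in contribution (1) can be illustrated with a toy sketch. Assuming a known per-file request popularity and a cost that charges a backhaul penalty on cache misses (the cost function, penalty values, and file names below are illustrative assumptions, not taken from the thesis), the greedy rule adds at each step the file whose caching reduces the expected cost the most:

```python
# Hypothetical sketch of greedy cache placement under a capacity constraint.
# When the cost objective is supermodular (as proved in the thesis for its
# own, more elaborate cost), this greedy rule yields a good suboptimal
# solution at far lower complexity than exhaustive search.

def system_cost(cached, popularity, miss_penalty=10.0, hit_cost=1.0):
    """Expected per-request cost: cached files are cheap, misses pay backhaul."""
    return sum(p * (hit_cost if f in cached else miss_penalty)
               for f, p in popularity.items())

def greedy_placement(popularity, capacity):
    cached, candidates = set(), set(popularity)
    while len(cached) < capacity and candidates:
        # pick the file whose addition yields the lowest resulting cost
        best = min(candidates, key=lambda f: system_cost(cached | {f}, popularity))
        if system_cost(cached | {best}, popularity) >= system_cost(cached, popularity):
            break  # no remaining file reduces the cost
        cached.add(best)
        candidates.remove(best)
    return cached

pop = {"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}
print(sorted(greedy_placement(pop, 2)))  # → ['A', 'B']
```

With a capacity of 2, the two most popular files are cached, as expected for this simple separable cost; the thesis's cost additionally couples files through mobility and social relationships, which is where supermodularity matters.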
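The score-matrix prediction in contribution (2) rests on collaborative filtering with a latent factor model. A minimal sketch of such a model, trained by stochastic gradient descent on a tiny made-up user-file rating set (the dimensions, hyperparameters, and data are illustrative, not the thesis's):

```python
import numpy as np

# Latent factor model: approximate the sparse score matrix R by P @ Q.T,
# where P holds user factors and Q holds file factors. Unobserved entries
# of the reconstruction serve as predicted preferences.

def factorize(ratings, n_users, n_files, k=2, lr=0.05, reg=0.02, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_files, k))   # file latent factors
    for _ in range(epochs):
        for u, f, r in ratings:                   # observed (user, file, score)
            err = r - P[u] @ Q[f]
            P[u] += lr * (err * Q[f] - reg * P[u])  # SGD step with L2 regularization
            Q[f] += lr * (err * P[u] - reg * Q[f])
    return P, Q

# toy data: user 0 favors files 0 and 1, user 1 favors files 2 and 3
obs = [(0, 0, 5), (0, 1, 4), (1, 2, 5), (1, 3, 4), (0, 2, 1), (1, 0, 1)]
P, Q = factorize(obs, n_users=2, n_files=4)
scores = P @ Q.T   # predicted preference of every user for every file
```

The thesis then feeds such predicted scores, together with user activity, into a deep-learning stage; that stage is not sketched here.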
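The traditional replacement policies that contribution (3) benchmarks against can be compared directly by replaying a request trace. A small simulator sketch for FIFO, LRU, and LFU (the trace and capacity are made up for illustration; the thesis's A3C-based policy is not reproduced here):

```python
from collections import Counter, OrderedDict

def hit_count(trace, capacity, policy):
    """Count cache hits under a FIFO, LRU, or LFU replacement policy."""
    cache = OrderedDict()   # keys = cached files; order = insertion (FIFO) or recency (LRU)
    freq = Counter()        # total request counts, used by LFU
    hits = 0
    for f in trace:
        freq[f] += 1
        if f in cache:
            hits += 1
            if policy == "LRU":
                cache.move_to_end(f)                       # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                if policy == "LFU":
                    victim = min(cache, key=freq.__getitem__)  # least-frequently requested
                else:                                      # FIFO and LRU both evict the head
                    victim = next(iter(cache))
                del cache[victim]
            cache[f] = None
    return hits

trace = ["A", "B", "C", "A", "A", "B", "D", "A", "B", "A"]
for policy in ("FIFO", "LRU", "LFU"):
    print(policy, hit_count(trace, 2, policy))
```

On this toy trace LFU scores best because the popularity is static; the thesis's point is precisely that when popularity drifts over time, a learned policy can outperform all three fixed heuristics.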
Keywords/Search Tags:Caching, Heterogeneous Network, Machine Learning, Integer Programming, Greedy Algorithm, Popularity Prediction, Deep Reinforcement Learning