
Content Delivery And Intelligent Caching Strategies In Wireless Networks

Posted on: 2019-04-27
Degree: Master
Type: Thesis
Country: China
Candidate: N. F. Zhang
Full Text: PDF
GTID: 2428330590492364
Subject: Electronic and Communication Engineering

Abstract/Summary:
In recent years, mobile cellular network traffic has grown exponentially, and the driving force behind this growth has shifted from traditional "connection-centric" services (such as telephony and text messaging) to "content-centric" services (such as video streaming and content sharing). Proactively caching content at the edge of the wireless network is an effective way to ease the traffic burden, reduce content access delay, and improve the user experience. However, the gain promised by classic coded caching is reduced by the limited file size in practical systems, and static cache placement schemes cannot cope with dynamic content popularity and user demand. This thesis proposes an efficient content delivery scheme for decentralized coded-caching networks with finite file packetization, and designs intelligent caching strategies for wireless caching networks with dynamic content popularity and user demand. The main research results are as follows:

1. For caching networks with finite file size, we propose a new delivery scheme for decentralized cache placement that minimizes the total network traffic load. The main idea is to characterize the fitness of each pair of mergeable packets; minimizing the network traffic load is then equivalent to minimizing the total misfit. Based on the misfit function, we greedily and sequentially select the packet pair with the minimum misfit value for coded multicasting. Numerical results show that the proposed scheme attains the lowest traffic load among all existing schemes, at all file and cache sizes, with polynomial complexity.

2. For a wireless network with a single cache node, we propose a new linear prediction model, named the grouped linear model (GLM), to estimate future content requests from historical data. Unlike many existing works that assume a static content popularity profile, our model adapts to the temporal variation of content popularity in practical systems caused by the arrival of new content and the dynamics of user preference. Based on the predicted content requests, we then propose a reinforcement learning approach with model-free acceleration (RLMA) for online cache replacement that accounts for both cache hits and replacement cost. This approach accelerates learning in non-stationary environments by generating imaginary samples for Q-value updates. Numerical results based on real-world traces show that the proposed prediction- and learning-based online caching policy outperforms all considered existing schemes.

3. For a wireless network with multiple cache nodes, we propose a novel distributed cache placement approach based on collaborative multi-agent reinforcement learning with local observations. Each user can acquire its requested content directly from its local cache, from neighboring devices via D2D communication, or from the base stations (BSs). We formulate an optimization problem that minimizes the average download delay subject to the storage capacities of the BSs and the users. Because users are mobile and the content popularity is unknown, this optimization problem is intractable. To find an approximately optimal solution, we develop a multi-agent reinforcement learning based distributed algorithm that performs caching without a central coordinator. The multi-agent framework reduces the state and action spaces and improves the efficiency of the algorithm, while edge-based Q-function decomposition achieves cooperation between cache nodes and thus reduces communication overhead. Simulation results show that the proposed distributed algorithm significantly reduces the average download delay compared to popularity-based caching, and as the algorithm converges, its average download delay approaches that of the centralized greedy algorithm.
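The greedy delivery idea of result 1 can be sketched in code. The abstract does not give the thesis's actual misfit function, so the one below is a hypothetical placeholder (it scores a packet pair by how far it is from a perfect XOR merge, i.e. whether each packet is cached by the other packet's receiver); the packet representation and field names are likewise illustrative assumptions.

```python
import itertools

def misfit(p, q):
    # Hypothetical misfit: 0 when each packet is already cached by the
    # other packet's receiver (a perfect coded merge), higher otherwise.
    return ((0 if q["user"] in p["cached_by"] else 1)
            + (0 if p["user"] in q["cached_by"] else 1))

def greedy_delivery(packets):
    """Greedily and sequentially pair the packets with minimum misfit
    for coded multicasting; any leftover packet is sent uncoded."""
    remaining = list(packets)
    transmissions = []
    while len(remaining) > 1:
        i, j = min(itertools.combinations(range(len(remaining)), 2),
                   key=lambda ij: misfit(remaining[ij[0]], remaining[ij[1]]))
        # One coded (XOR) transmission serves both receivers.
        transmissions.append((remaining[i]["id"], remaining[j]["id"]))
        for k in sorted((i, j), reverse=True):
            remaining.pop(k)
    if remaining:
        transmissions.append((remaining[0]["id"],))
    return transmissions

packets = [
    {"id": "A1", "user": 1, "cached_by": {2}},      # user 2 caches A1
    {"id": "B2", "user": 2, "cached_by": {1}},      # user 1 caches B2
    {"id": "C3", "user": 3, "cached_by": set()},    # cached by no one
]
print(greedy_delivery(packets))  # → [('A1', 'B2'), ('C3',)]
```

Here the traffic load is simply the number of transmissions, so pairing A1 with B2 (zero misfit) saves one transmission versus sending all three packets uncoded.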
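The online replacement policy of result 2 combines learned content values with imaginary samples drawn from a request model. The sketch below is a heavily simplified assumption of that structure, not the thesis's RLMA algorithm: it keeps one value per content, uses the empirical request counts as a crude stand-in for the GLM predictor, and replays imaginary requests from that model to accelerate the value updates.

```python
import random
from collections import defaultdict

class RLMACacheSketch:
    """Illustrative learning-based cache replacement with imaginary samples.
    The per-content value update and the replacement rule are assumptions
    made for this sketch, not the thesis's exact algorithm."""

    def __init__(self, capacity, alpha=0.2, replace_cost=0.5):
        self.capacity = capacity
        self.alpha = alpha               # learning rate
        self.replace_cost = replace_cost # penalty for evicting a content
        self.q = defaultdict(float)      # learned value of caching a content
        self.counts = defaultdict(int)   # empirical request model
        self.cache = set()

    def _update(self, content, reward):
        self.q[content] += self.alpha * (reward - self.q[content])

    def request(self, content):
        """Serve one request; returns True on a cache hit."""
        self.counts[content] += 1
        if content in self.cache:
            self._update(content, 1.0)   # reward a hit
            self._imagine()
            return True
        self._update(content, 0.0)       # penalize a miss
        if len(self.cache) < self.capacity:
            self.cache.add(content)
        else:
            victim = min(self.cache, key=lambda c: self.q[c])
            # Replace only if the gain outweighs the replacement cost.
            if self.q[content] - self.replace_cost > self.q[victim]:
                self.cache.remove(victim)
                self.cache.add(content)
        self._imagine()
        return False

    def _imagine(self, n=5):
        """Model-free acceleration: extra value updates from imaginary
        requests sampled from the empirical request model."""
        if not self.counts:
            return
        items, weights = zip(*self.counts.items())
        for c in random.choices(items, weights=weights, k=n):
            self._update(c, 1.0 if c in self.cache else 0.0)

cache = RLMACacheSketch(capacity=2)
cache.request("video1")          # miss: admitted while there is room
print(cache.request("video1"))   # → True (hit)
```

The imaginary updates let the values of popular contents rise faster than real requests alone would allow, which is the acceleration idea described above.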
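The edge-based Q-function decomposition of result 3 approximates the value of a joint caching action as a sum of per-edge terms, one for each pair of neighboring cache nodes. The toy sketch below illustrates only that decomposition; the even reward split across edges and the brute-force joint maximization (in place of the thesis's distributed, coordinator-free algorithm) are simplifying assumptions for a two-node example.

```python
import itertools
from collections import defaultdict

class EdgeQSketch:
    """Illustrative edge-based Q-function decomposition: the global value
    of a joint caching action is the sum of per-edge Q-functions, so each
    term depends only on the two neighboring nodes' local actions."""

    def __init__(self, edges, actions, alpha=0.1):
        self.edges = edges            # list of (i, j) neighbor pairs
        self.actions = actions        # node -> list of candidate contents
        self.alpha = alpha
        self.q = defaultdict(float)   # (edge, a_i, a_j) -> learned value

    def value(self, joint):
        return sum(self.q[(e, joint[e[0]], joint[e[1]])] for e in self.edges)

    def best_joint_action(self):
        # Brute force over joint actions (fine for a toy example; the
        # thesis avoids this with a distributed algorithm).
        joints = [dict(zip(self.actions, combo))
                  for combo in itertools.product(*self.actions.values())]
        return max(joints, key=self.value)

    def update(self, joint, reward):
        # Split the observed reward evenly across edges (an assumption).
        share = reward / len(self.edges)
        for e in self.edges:
            key = (e, joint[e[0]], joint[e[1]])
            self.q[key] += self.alpha * (share - self.q[key])

# Toy example: two neighboring nodes; reward favors caching different
# contents, so together they cover more of the catalog.
eq = EdgeQSketch(edges=[(0, 1)], actions={0: ["A", "B"], 1: ["A", "B"]})
for _ in range(50):
    for a0, a1 in itertools.product("AB", repeat=2):
        eq.update({0: a0, 1: a1}, 1.0 if a0 != a1 else 0.0)
print(eq.best_joint_action())  # the two nodes cache different contents
```

Because each Q-term involves only one edge, neighboring nodes can learn and exchange these terms locally, which is how the decomposition keeps the communication overhead low.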
Keywords/Search Tags:Coded caching, multicasting, reinforcement learning, D2D caching, multi-agent