In recent years, with the development of 5th Generation Mobile Communication Technology (5G), mobile devices and media content have proliferated, data traffic has grown at an alarming rate, and base stations and core network links are under enormous pressure. In the traditional mode, directly serving user content requests from the server side can no longer meet today's users' low-latency access requirements. As an emerging technology, edge caching stores popular content at edge nodes close to the user end to provide users with nearby network services, thereby reducing the backhaul traffic and redundant network traffic caused by users fetching content directly from cloud data centers. While ensuring service quality, edge caching can provide users with a high-bandwidth, low-latency network environment.

This paper studies the caching mechanism of regional edge nodes and proposes an edge caching algorithm based on the Asynchronous Advantage Actor-Critic (A3C) algorithm, combining A3C with edge caching. A3C is characterized by asynchronous training, comprises two networks (an actor and a critic), and has strong learning and adaptive ability in dynamic environments. Through cooperation among nodes and the sharing of caching experience, the proposed A3C-based edge caching algorithm can learn independently and adapt to complex edge caching scenarios, so as to make correct caching decisions.

For an edge cache system, the service scope and the number of users may grow, so the problem of newly added nodes must be considered. Retraining from scratch for each new node wastes time and causes a cold-start problem. Users in the service areas of adjacent base stations have similar content request characteristics; that is, adjacent edge cache nodes have similar local content popularity. Therefore, transfer learning is introduced in this paper: by transferring neural network parameters, it reduces the training time of new nodes and puts them into service faster.

With the increase in the number of users and the amount of content, the traditional three-tier cloud-edge-terminal architecture can no longer meet demand. Inspired by the region-by-region, level-by-level management model, this paper adopts a layered, multi-level mechanism that responds to end users' requests nearby and in a graded manner, so that requests do not go directly to the data center, relieving the traffic pressure on the backbone network to a certain extent.

To evaluate the performance of the proposed Hierarchical Collaborative Edge Cache Architecture (HCECA), simulation experiments compared it with Least Recently Used (LRU), Least Frequently Used (LFU), First In First Out (FIFO), and the reinforcement learning algorithm Q-Learning. Experimental results demonstrate the effectiveness of the proposed architecture and algorithm in reducing backhaul traffic, improving the cache hit ratio, and solving the cold-start problem.

There are 23 figures, 2 tables, and 85 references.
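To make the comparison baselines concrete, the following is a minimal sketch of one of them: an LRU cache that tracks the cache hit ratio, the main metric reported above. The class and method names (`LRUCache`, `request`, `hit_ratio`) are illustrative assumptions, not identifiers from the thesis; the eviction logic itself is standard LRU.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU baseline: evicts the least recently used content
    item when capacity is exceeded, and tracks the cache hit ratio."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order = recency order
        self.hits = 0
        self.requests = 0

    def request(self, content_id):
        """Serve one content request; returns True on a cache hit."""
        self.requests += 1
        if content_id in self.store:
            self.store.move_to_end(content_id)  # mark as most recently used
            self.hits += 1
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[content_id] = True  # cache the newly fetched content
        return False

    def hit_ratio(self):
        return self.hits / self.requests if self.requests else 0.0

# Example: with capacity 2, the frequently requested item "a" stays cached.
cache = LRUCache(capacity=2)
for c in ["a", "b", "a", "c", "a", "b"]:
    cache.request(c)
```

LFU and FIFO baselines differ only in the eviction rule (evict the least frequently requested item, or the oldest inserted item, respectively), while the proposed A3C-based algorithm replaces the fixed rule with a learned caching policy.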