
Reinforcement Learning-Based Joint Optimization of Communication-Computing-Caching for End-Edge-Cloud Networks

Posted on: 2024-09-16
Degree: Master
Type: Thesis
Country: China
Candidate: T Q Huang
Full Text: PDF
GTID: 2558307115489614
Subject: Electronic information
Abstract/Summary:
With the rapid development of Artificial Intelligence (AI) and the Internet of Things (IoT), the large amounts of data processed by AI models can provide intelligent and personalized services for users. These applications require massive communication, caching, and computing (3C) resources. By caching the common data of tasks close to users, communication overhead can be reduced; by offloading tasks to nodes with abundant computing resources, such as nearby idle terminals, edge servers, and cloud servers, task computation latency can be reduced. In recent years, researchers have studied allocating 3C resources to tasks with different requirements in order to improve network resource utilization and reduce task completion delay. However, most current research focuses on a single type of application and cannot be applied to a heterogeneous network serving different application types. Moreover, most existing work considers caching data only on the edge server and cannot fully utilize the resources of idle terminals and the cloud server. Therefore, we consider the joint optimization of 3C resources in an end-edge-cloud network and formulate the problem as a Markov decision process, aiming to reduce latency and improve network resource utilization. The main research content of this thesis is as follows:

(1) A deep reinforcement learning-based joint optimization strategy for computation offloading and data caching is proposed to address the task scheduling problem in heterogeneous, multi-task end-edge-cloud networks. By offloading tasks and caching common data or result data at neighboring idle terminals, edge servers, and cloud servers, the proposed strategy reduces task completion latency. Experimental results demonstrate that the proposed algorithm effectively reduces the total task completion delay and the number of unfinished tasks, achieving the highest cache hit rate and maximum throughput with minimal communication overhead. Compared to the alternating direction method of multipliers and the iterative block successive upper-bound minimization algorithm, the proposed algorithm reduces the average task completion latency by 12% and 17%, respectively, under different Zipf parameters.

(2) To address the additional communication overhead caused by the inconsistency between computation and caching nodes during task scheduling, a multi-agent deep reinforcement learning-based joint optimization strategy for 3C (computation, caching, and communication) resources is proposed. Computation agents and caching agents are deployed to make computation offloading and data caching decisions, reducing task completion latency, improving caching efficiency, and minimizing communication overhead. Experimental results demonstrate that the proposed algorithm balances the consumption of computation, caching, and communication resources in the end-edge-cloud network. Compared to the twin delayed deep deterministic policy gradient algorithm and the alternating direction method of multipliers algorithm, the proposed algorithm reduces the additional communication overhead by 87% and 81%, respectively, under different Zipf parameters.
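To illustrate the MDP formulation described above, the following is a minimal sketch of an offloading decision problem, using tabular Q-learning as a simple stand-in for the thesis's deep reinforcement learning agent. All node names, compute rates, link rates, and task parameters here are illustrative assumptions, not values from the thesis.

```python
# Minimal MDP sketch for computation offloading in an end-edge-cloud
# network. State: a task class plus cache status; action: offloading
# target; reward: negative completion latency. Tabular Q-learning is
# used here purely for self-containment (the thesis uses deep RL).
import random

NODES = ["local", "idle_terminal", "edge", "cloud"]  # offloading targets
# Assumed per-node compute rate (cycles/s) and uplink rate (bits/s).
CPU = {"local": 1e9, "idle_terminal": 2e9, "edge": 8e9, "cloud": 3e10}
LINK = {"local": float("inf"), "idle_terminal": 5e6, "edge": 2e7, "cloud": 1e7}

def latency(task_bits, task_cycles, node, cached):
    """Completion latency = transmission time + computation time.
    If the task's common data is already cached at the node, the
    transmission step is skipped (the 3C coupling in the abstract)."""
    tx = 0.0 if cached else task_bits / LINK[node]
    return tx + task_cycles / CPU[node]

Q = {}           # state -> list of action values
alpha, eps = 0.1, 0.2

def step(state, task_bits, task_cycles, cache):
    """One epsilon-greedy decision plus a one-step value update."""
    qs = Q.setdefault(state, [0.0] * len(NODES))
    if random.random() < eps:
        a = random.randrange(len(NODES))
    else:
        a = max(range(len(NODES)), key=qs.__getitem__)
    r = -latency(task_bits, task_cycles, NODES[a], cache[NODES[a]])
    qs[a] += alpha * (r - qs[a])
    return NODES[a], r

random.seed(0)
# Assumed cache placement: common data cached locally and at the edge.
cache = {"local": True, "idle_terminal": False, "edge": True, "cloud": False}
for _ in range(2000):
    step("small_task", 1e6, 5e8, cache)
best = max(range(len(NODES)), key=Q["small_task"].__getitem__)
print("learned target:", NODES[best])
```

With these assumed rates, the edge server is the lowest-latency target for the example task because its data is cached there (no transmission) and its compute rate is high, so the agent converges to offloading to the edge. The same state/action/reward structure extends naturally to the multi-agent setting in (2), where separate agents would choose the caching placement and the offloading target.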
Keywords/Search Tags: 3C Resources, End-Edge-Cloud, Data Caching, Computation Offloading, Multi-Agent Reinforcement Learning