
Research On Computation Offloading And Cache Strategies Based On Deep Reinforcement Learning

Posted on: 2022-12-09
Degree: Master
Type: Thesis
Country: China
Candidate: K K Ding
Full Text: PDF
GTID: 2518306776953009
Subject: Automation Technology
Abstract/Summary:
With the rapid development of the Internet of Things, the number of devices and emerging applications has surged, and the explosive growth of network traffic and computing demand has placed a huge load on cloud centers. By deploying communication, computing, and storage resources closer to users, edge computing can reduce traffic on the backbone network and the load on the cloud center, providing rapid responses to user requests. Compared with cloud centers, however, edge servers have limited communication, computing, and storage resources and cannot meet all user requirements. To make full use of network resources, a device-to-device (D2D) link can offload a user's tasks to adjacent terminals with abundant computing resources, reducing both the load on edge servers and communication costs. When a computing task is large, it can be divided into multiple subtasks that are then offloaded separately. However, most existing computation offloading methods for subtasks neglect the dependencies between subtasks, so this paper proposes a hybrid offloading mechanism for subtasks to reduce the overall task completion delay. By caching common computing data, the task data transmission delay during offloading can be reduced, which relieves wireless-link pressure and shortens task completion delay. Therefore, on the basis of subtask hybrid computation offloading, this paper proposes a subtask computation caching strategy that makes caching decisions for subtasks, so as to reduce the total latency of the network. The main contents of this paper are as follows:

1) Most existing offloading strategies rarely consider the dependencies between subtasks, which affects the overall task completion delay. This paper therefore proposes a Deep Reinforcement Learning-based Sub-Task Computing Offloading strategy (SCOS) that takes the effect of subtask dependencies on task completion delay into account. After a user task is divided into subtasks, the dependencies among the subtasks are modeled as a directed acyclic graph. The deep reinforcement learning agent selects the optimal processing node for each subtask according to the subtask information, task dependencies, and available network resources, so as to reduce the overall task completion delay. Candidate processing nodes include the local device, idle neighbor terminals, and the edge server. Experimental results show that, compared with other offloading strategies, the proposed subtask computation offloading strategy reduces not only task completion delay but also terminal energy consumption.

2) To make full use of network resources and reduce the overall network delay, a Deep Reinforcement Learning-based Sub-Task Computing Caching strategy (SCCS) is proposed on top of the subtask computation offloading, with the goal of reducing the total network latency. According to task information, network resources, data cache status, and task popularity, the agent makes offloading and caching decisions for subtasks, realizing joint optimization of caching and computing resources. The cached universal data can serve other user tasks, reducing both task transmission delay and the tasks' demand for computing resources. Experimental results show that, compared with other computation caching strategies, the proposed caching strategy improves the cache hit ratio and reduces network delay.
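To make the DAG-based offloading concrete, the sketch below schedules subtasks over the three node types named in the abstract (local device, idle neighbor terminal over D2D, edge server). The per-unit processing and transmission delays are illustrative assumptions, and a simple greedy rule stands in for the thesis's DRL agent; it is a minimal model, not the SCOS implementation.

```python
# Hypothetical per-node delays; values are illustrative assumptions only.
PROC_DELAY = {"local": 1.0, "neighbor": 0.6, "edge": 0.3}  # seconds per unit work
TX_DELAY = {"local": 0.0, "neighbor": 0.2, "edge": 0.5}    # one-way transfer delay

def schedule(subtasks, deps):
    """Greedily assign each subtask to the node minimizing its finish time.

    subtasks: {task_id: workload}
    deps:     {task_id: [prerequisite task_ids]}  (must form a DAG)
    Returns (assignment, finish_times).
    """
    finish, assignment = {}, {}
    remaining = dict(subtasks)
    while remaining:
        # Process subtasks whose prerequisites are all done (topological order).
        ready = [t for t in remaining if all(p in finish for p in deps.get(t, []))]
        for t in ready:
            # A subtask cannot start before its last prerequisite finishes.
            start = max((finish[p] for p in deps.get(t, [])), default=0.0)
            # Candidate cost on each node: dependency wait + transfer + processing.
            best = min(PROC_DELAY,
                       key=lambda n: start + TX_DELAY[n] + remaining[t] * PROC_DELAY[n])
            finish[t] = start + TX_DELAY[best] + remaining[t] * PROC_DELAY[best]
            assignment[t] = best
            del remaining[t]
    return assignment, finish
```

With a diamond-shaped dependency graph (B and C both depend on A, D depends on both), the scheduler overlaps B and C on different nodes, so the makespan is shorter than running the chain sequentially on the local device.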
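The caching side can likewise be illustrated with a toy popularity-aware heuristic: cache the data items with the best popularity-to-size ratio under a capacity budget, then measure the hit ratio over a request trace. The item sizes, popularities, and the greedy value-density rule are assumptions standing in for the DRL caching agent, and `cache_decision` / `hit_ratio` are hypothetical names for illustration.

```python
def cache_decision(items, capacity):
    """Cache items with the highest popularity-per-size under a capacity budget.

    items: {item_id: (size, popularity)}
    A greedy value-density heuristic stands in for the DRL caching agent.
    """
    cached, used = set(), 0
    for item, (size, pop) in sorted(items.items(),
                                    key=lambda kv: kv[1][1] / kv[1][0],
                                    reverse=True):
        if used + size <= capacity:  # skip items that would exceed capacity
            cached.add(item)
            used += size
    return cached

def hit_ratio(requests, cached):
    """Fraction of requests served from cache (higher means less backhaul traffic)."""
    if not requests:
        return 0.0
    return sum(1 for r in requests if r in cached) / len(requests)
```

A request served from the cache avoids the data transmission delay entirely, which is how a higher hit ratio translates into the lower network latency reported for SCCS.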
Keywords/Search Tags: Edge computing, Deep Reinforcement Learning, Computation offloading, Data caching