
Research On Task Offloading And Resource Allocation Based On Deep Reinforcement Learning In Mobile Edge Computing

Posted on: 2022-10-18    Degree: Master    Type: Thesis
Country: China    Candidate: Z Zhao    Full Text: PDF
GTID: 2518306731453504    Subject: Software engineering
Abstract/Summary:
With the continued development of technologies such as the Internet of Things, the Internet of Vehicles, and 5G, novel applications such as virtual reality, augmented reality, smart homes, smart grids, and driverless cars are constantly emerging. These applications share common characteristics: they demand greater computing capacity, higher bandwidth, and lower latency, and they consume more energy. Because the resources on user equipment are limited, it is difficult to meet the delay and energy-consumption requirements of these new applications, and Mobile Edge Computing (MEC) has become an effective solution. MEC servers are deployed near users to provide available resources for user equipment, effectively resolving the conflict between limited device resources and the high demands of new applications. By offloading tasks to the MEC server, the requirements on computing capacity, bandwidth, and delay can be satisfied, providing users with reliable Quality of Service (QoS). Therefore, effectively selecting appropriate computing nodes for task offloading and allocating resources has become a major research direction. Motivated by the success of Deep Reinforcement Learning (DRL) in decision-making problems, this thesis proposes DRL-based task offloading and resource allocation algorithms for the multi-user, multi-server MEC scenario to optimize the delay and energy consumption of user tasks. The specific work is as follows:

(1) In a multi-user, multi-task environment, a DRL-based MEC system model is constructed, including user task prioritization and the local and offloading computation models. Considering multiple factors such as task priority, user mobility, delay tolerance, and the resource constraints of computing nodes, the reduction in task delay and energy consumption relative to local execution is defined as the system benefit, and minimizing delay and energy consumption is taken as the optimization objective to improve system efficiency.

(2) A computing node selection algorithm (mKSS) based on the K-Nearest Neighbor (KNN) method is proposed to select an appropriate offloading node for each user. In the multi-server scenario, when a user task needs to be offloaded to an MEC server, the delay, energy, and computing-resource requirements of the task are taken into account when choosing the offloading node, and user mobility is treated as a factor affecting the transmission rate, so as to avoid overloading the most powerful MEC server.

(3) Combining the mKSS algorithm with Q-learning (QL) and Deep Q-Network (DQN), a QL-based task offloading and resource allocation algorithm (mKSSQ) and a DQN-based task offloading and resource allocation algorithm (mKSSDQ) are proposed. The task offloading and resource allocation problem of the MEC system is modeled as a Markov Decision Process (MDP), with the system state, action, and reward defined accordingly. When a user task needs to be offloaded, an appropriate computing node is first selected by the mKSS algorithm; then, through interaction between the agent and the MEC environment, the selected computing node allocates computing resources to the user task so as to optimize delay and energy consumption and improve system efficiency.

Experimental results show that, under different parameter settings, the proposed algorithms significantly outperform the local-execution, full-offloading, and random-offloading strategies, reducing delay and energy consumption and improving the efficiency of the MEC system.
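
To illustrate the node-selection step in (2), the following is a minimal sketch of what a KNN-style choice of offloading node might look like. The per-server feature set (available CPU, uplink rate, queue load), the demand vector, and the value of k are illustrative assumptions for this example, not the thesis's actual mKSS formulation.

import numpy as np

# Hypothetical per-server features: [available CPU (GHz), uplink rate to the
# user (Mbps), current queue load in 0..1]. Feature names, units, and k are
# assumptions made for this sketch.
def select_offload_node(servers, task_demand, k=3):
    """KNN-style selection: rank servers by distance to the task's demand
    vector, then pick the least-loaded of the k nearest candidates."""
    servers = np.asarray(servers, dtype=float)      # shape (N, 3)
    demand = np.asarray(task_demand, dtype=float)   # shape (3,)
    scale = servers.max(axis=0) + 1e-9              # bring features to comparable ranges
    dist = np.linalg.norm(servers / scale - demand / scale, axis=1)
    nearest = np.argsort(dist)[:k]                  # indices of the k closest servers
    # Among the k nearest, prefer the lowest queue load (column 2) so that the
    # single most powerful server does not become overloaded.
    return int(nearest[np.argmin(servers[nearest, 2])])

# Example: three candidate servers, a task wanting about 2 GHz and 20 Mbps.
servers = [[4.0, 30.0, 0.7], [2.5, 25.0, 0.2], [8.0, 10.0, 0.9]]
print(select_offload_node(servers, [2.0, 20.0, 0.0], k=2))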
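
Similarly, the resource-allocation step in (3) can be illustrated with a minimal Q-learning sketch. The learning rate, discount factor, exploration rate, discrete allocation levels, and reward weighting below are placeholders chosen for the example (the abstract only states that the problem is modeled as an MDP); the reward follows the "system benefit" idea of rewarding the delay and energy saved relative to local execution.

import random
from collections import defaultdict

# ALPHA, GAMMA, EPS, ACTIONS, and w are assumptions for this sketch, not
# values taken from the thesis.
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
ACTIONS = (0.25, 0.5, 0.75, 1.0)           # fraction of the node's CPU given to the task
Q = defaultdict(float)                     # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the discrete allocation levels."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward_fn(delay, energy, local_delay, local_energy, w=0.5):
    """Reward is the weighted delay and energy saved versus purely local
    execution, mirroring the system-benefit definition in the abstract."""
    return w * (local_delay - delay) + (1 - w) * (local_energy - energy)

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

The mKSSDQ variant described in the abstract replaces the table Q with a neural network approximator, but follows the same interaction loop between the agent and the MEC environment.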
Keywords/Search Tags:Mobile Edge Computing, Task Offloading, Resource Allocation, Deep Reinforcement Learning