
Computation Offloading In Mobile Edge Computing System Based On Game Theory And Reinforcement Learning

Posted on: 2021-04-06    Degree: Master    Type: Thesis
Country: China    Candidate: M D Yu    Full Text: PDF
GTID: 2480306503473554    Subject: Electronics and Communications Engineering
Abstract/Summary:
With the increasing popularity of smartphones, more and more new mobile applications such as interactive games and augmented reality have emerged. Such applications usually demand large amounts of computing resources and consume considerable energy, yet, owing to physical size limitations, mobile devices often have limited computing resources and battery life. Mobile Edge Computing (MEC) offers a way to ease the tension between computation-intensive applications and resource-constrained mobile devices. The main service mode of MEC is computation offloading: capability-limited terminals offload heavy computation tasks to an edge environment with sufficient resources, compensating for the shortage of storage and computing resources on mobile terminals. Although a great deal of research on computation offloading has been carried out, two problems remain to be solved. First, the selfishness of terminal devices and the MEC server must be taken into account: how should coordinated computation offloading be performed when their goals are not aligned? Second, an ultra-dense network contains many MEC servers at different locations, and collaborative computation offloading among them incurs excessive signaling overhead. This thesis proposes solutions to both problems.

First, a distributed computation offloading algorithm is designed for the multi-user, single-MEC-server scenario. Each user decides whether to offload according to the available wireless and computing resources, aiming to minimize its own task completion delay. The MEC server allocates computing resources to users according to their offloading decisions, aiming to minimize the task completion delay of the whole system. The Stackelberg game is introduced to resolve the inconsistency between the objectives of the users and the MEC server: the users and the MEC server play the game repeatedly until an equilibrium state is reached. Simulation experiments show that the proposed distributed computation offloading algorithm effectively reduces users' delay overhead.

Second, for an ultra-dense network with multiple users and multiple MEC servers, this thesis introduces reinforcement learning, improves the classical Multi-Armed Bandit (MAB) algorithm to suit the network environment, and proposes a distributed task offloading algorithm based on MAB. In this distributed decision-making scenario there is no additional central control node performing unified scheduling; each user selects the target MEC node for its current task based on the task completion results observed from each MEC node in the past, so as to minimize the average completion delay of its tasks. An upper bound on the average cumulative regret of the algorithm is derived analytically. Finally, simulations show that the improved MAB algorithm achieves a lower average task completion delay.
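The abstract does not give the concrete objective functions of the two schemes, so the following are illustrative sketches under stated assumptions, not the thesis's actual models. The leader-follower structure of the Stackelberg scheme can be written roughly as follows, where the symbols d_i (task input size), c_i (required CPU cycles), r_i (uplink rate), f_i^loc (local CPU frequency), f_i (server CPU cycles allocated to user i), and F (total server capacity) are all assumed for illustration:

    % MEC server: allocate computing resources to offloading users
    % to minimize the total completion delay of the system
    \min_{\{f_i\}} \sum_{i:\,a_i=1} \Big(\tfrac{d_i}{r_i} + \tfrac{c_i}{f_i}\Big)
    \quad \text{s.t.} \quad \sum_{i:\,a_i=1} f_i \le F

    % user i: offload (a_i = 1) or compute locally (a_i = 0)
    % to minimize its own task completion delay
    a_i^{*} = \arg\min_{a_i\in\{0,1\}}
    (1-a_i)\,\tfrac{c_i}{f_i^{\mathrm{loc}}}
    + a_i\Big(\tfrac{d_i}{r_i} + \tfrac{c_i}{f_i}\Big)

For the MAB-based offloading decision, a standard UCB1-style selection rule gives the flavor of the approach: each user treats the candidate MEC servers as arms and repeatedly picks the server with the most optimistic (lowest) delay estimate. The thesis improves on the classical MAB algorithm and its exact index is not stated in the abstract, so the function names and the normalization of delays to [0, 1] below are assumptions:

    import math
    import random

    def offload_ucb(num_servers, num_rounds, observe_delay):
        """UCB1-style MEC server selection for task offloading.
        observe_delay(k) returns the measured completion delay of
        offloading one task to server k, assumed normalized to [0, 1]."""
        counts = [0] * num_servers          # times each server was chosen
        mean_delay = [0.0] * num_servers    # empirical mean delay per server

        for t in range(1, num_rounds + 1):
            if t <= num_servers:
                k = t - 1                   # try every server once first
            else:
                # lower confidence bound on delay (we minimize a cost)
                k = min(
                    range(num_servers),
                    key=lambda j: mean_delay[j]
                    - math.sqrt(2.0 * math.log(t) / counts[j]),
                )
            d = observe_delay(k)            # offload the task, observe delay
            counts[k] += 1
            mean_delay[k] += (d - mean_delay[k]) / counts[k]

        return mean_delay, counts

    # Usage with synthetic servers whose delays cluster around fixed means:
    if __name__ == "__main__":
        true_means = [0.3, 0.5, 0.7]
        sim = lambda k: min(1.0, max(0.0, random.gauss(true_means[k], 0.1)))
        means, counts = offload_ucb(len(true_means), 2000, sim)
        print("estimated mean delays:", [round(m, 3) for m in means])
        print("selection counts:", counts)

In a run of this sketch the fastest server is selected far more often than the others, which is the behavior that yields the sublinear cumulative regret bound mentioned in the abstract.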
Keywords/Search Tags:mobile edge computing, computation offloading, Stackelberg game, ultra-dense network, reinforcement learning