
Research on Strategies for Resource-Collaborative Vehicular Task Offloading Based on Deep Reinforcement Learning

Posted on: 2023-06-26  Degree: Master  Type: Thesis
Country: China  Candidate: J Yu  Full Text: PDF
GTID: 2532306845499524  Subject: Computer Science and Technology
Abstract/Summary:
The Internet of Vehicles is one of the key technologies driving the development of intelligent transportation systems. With the help of emerging technologies such as 5G and artificial intelligence, vehicular networks have given rise to a large number of delay-sensitive and compute-intensive applications. These applications can increase driving safety and improve traffic conditions, but they also pose severe challenges to the computing, storage, and communication capabilities of vehicles. To ease this tension, Mobile Cloud Computing and Mobile Edge Computing (MEC) have been introduced into vehicular networks to provide computation offloading services, and an effective computation offloading scheme is therefore of great significance for reducing delay and improving quality of service. Most existing work focuses on edge computation offloading, because MEC server resources are closer to vehicles and can achieve low delay and high reliability; however, MEC server resources are relatively limited and struggle to handle scenarios with intensive requests. This thesis therefore studies the computation offloading problem in vehicular networks from two perspectives: multi-MEC horizontal cooperation and edge-cloud vertical cooperation. To realize the effective integration and cooperation of server resources, a vehicular computation offloading architecture based on Software-Defined Networking is constructed first. On this basis, the thesis proposes offloading decision-making mechanisms based on deep reinforcement learning for the two scenarios, which make offloading decisions in real time to adapt to the dynamics of vehicular networks and improve offloading performance. The specific research work is summarized as follows:

(1) The joint optimization of computation offloading and load balancing in multi-MEC horizontal cooperative vehicular networks is studied. Because MEC server resources are limited and computing requests are unevenly distributed among MECs, it is important to design an effective offloading strategy that coordinates the resources of multiple MECs to achieve load balancing and improve resource utilization. The thesis first establishes the communication, computing, and load balancing models, and then formulates a constrained optimization problem that minimizes the average system cost. To jointly determine the offloading target server, the resource allocation scheme, and the task offloading ratio, a multi-step strategy is proposed. First, a joint computation offloading and load balancing algorithm based on deep reinforcement learning is designed; it takes all task requests and the resource distribution of the servers into account and determines the offloading target server for each vehicle in real time. Given the offloading decision, a single-server resource allocation scheme based on task priority and a method for solving the offloading ratio are then proposed in turn. The resulting resource allocation and offloading ratio are used to compute the real-time reward that trains the offloading decision model. Simulation results show that the proposed scheme not only balances the load but also greatly improves delay and task completion ratio compared with other baseline algorithms.
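The abstract does not spell out the algorithmic details, but the core decision step it describes, a deep-reinforcement-learning agent that maps the current task request and the MEC load distribution to an offloading target server and is trained with a reward derived from delay and load balancing, can be illustrated with a minimal DQN-style sketch. The state layout, network sizes, and reward shaping below are illustrative assumptions, not the thesis's actual design.

```python
# Minimal DQN-style sketch for choosing an offloading target MEC server.
# Assumptions (not from the thesis): state = [task size, task deadline,
# per-MEC load levels]; action = index of the target MEC; reward trades off
# task delay against load imbalance across MECs.
import random
import numpy as np
import torch
import torch.nn as nn

NUM_MEC = 4                      # number of cooperating MEC servers (assumed)
STATE_DIM = 2 + NUM_MEC          # task size, deadline, plus one load value per MEC

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_MEC))  # one Q-value per candidate MEC

    def forward(self, s):
        return self.net(s)

def reward(delay, loads, w_delay=1.0, w_balance=0.5):
    """Assumed reward: penalise task delay and load imbalance (std of loads)."""
    return -(w_delay * delay + w_balance * float(np.std(loads)))

def select_action(qnet, state, epsilon=0.1):
    """Epsilon-greedy choice of the offloading target server."""
    if random.random() < epsilon:
        return random.randrange(NUM_MEC)
    with torch.no_grad():
        q = qnet(torch.tensor(state, dtype=torch.float32))
    return int(torch.argmax(q).item())

# One illustrative decision step followed by a single TD update.
qnet = QNet()
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
state = np.array([5.0, 0.2, 0.3, 0.7, 0.5, 0.1], dtype=np.float32)  # toy values
action = select_action(qnet, state)
r = reward(delay=0.15, loads=state[2:])                 # toy delay after offloading
next_state = state.copy()
next_state[2 + action] += 0.1                           # chosen MEC gets busier

q_sa = qnet(torch.tensor(state))[action]
with torch.no_grad():
    target = r + 0.9 * qnet(torch.tensor(next_state)).max()
loss = nn.functional.mse_loss(q_sa, target)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

In the thesis, the priority-based single-server resource allocation and the offloading ratio are computed after the target server is chosen and then fed back into the reward; the sketch above collapses all of that into a single toy reward for brevity.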
(2) The offloading decision optimization problem in edge-cloud vertical cooperative vehicular networks is studied. The powerful computing capacity of the cloud server can alleviate the resource limitations of edge servers, so a task offloading strategy that coordinates cloud and edge resources is essential. Considering the differences among tasks, the thesis first prioritizes tasks according to their delay and computing resource demands, and uses this priority as an important guideline for selecting the offloading node and allocating resources. Second, taking priority, delay, and computing resource cost into account, a problem that maximizes the average utility of tasks is formulated and then cast as a Markov Decision Process, i.e., the state space, action space, and reward function are designed. On this basis, an adaptive offloading decision mechanism based on deep reinforcement learning is proposed. Simulation results show that, compared with the All Offloading and Average Resource Allocation strategies, the average utility of the proposed algorithm is more than doubled, and the completion ratios of all tasks and of higher-priority tasks are improved by at least 20%.
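The edge-cloud part is likewise described only at the level of its MDP formulation. Below is a minimal sketch of how such a formulation might look; the priority formula, the utility function, the toy delay model, and the three-way action space (local / edge / cloud) are assumptions made for illustration, not the thesis's definitions.

```python
# Illustrative MDP pieces for edge-cloud vertical cooperative offloading.
# Assumptions (not from the thesis): priority grows with computing demand and
# urgency; utility = priority-weighted deadline bonus minus delay and resource
# cost; the action chooses where the task runs.
from dataclasses import dataclass

ACTIONS = ("local", "edge", "cloud")   # assumed action space

@dataclass
class Task:
    cycles: float      # required CPU cycles (Gcycles)
    deadline: float    # tolerable delay (s)
    data: float        # input data size (MB)

def priority(task, max_cycles=10.0, max_deadline=1.0):
    """Assumed priority: tighter deadline and heavier computation -> higher rank."""
    urgency = 1.0 - min(task.deadline / max_deadline, 1.0)
    demand = min(task.cycles / max_cycles, 1.0)
    return 0.6 * urgency + 0.4 * demand

def delay(task, action):
    """Toy delay model: local is slow; edge adds a small uplink, cloud a larger one."""
    cpu = {"local": 1.0, "edge": 5.0, "cloud": 20.0}[action]       # Gcycles/s
    uplink = {"local": 0.0, "edge": 0.02, "cloud": 0.08}[action]   # s per MB
    return task.cycles / cpu + uplink * task.data

def utility(task, action, cost_weight=0.1):
    """Assumed per-task utility used as the RL reward."""
    d = delay(task, action)
    met = 1.0 if d <= task.deadline else -1.0
    resource_cost = {"local": 0.0, "edge": 0.5, "cloud": 1.0}[action]
    return priority(task) * met - d - cost_weight * resource_cost

# The state could simply concatenate task features with current edge/cloud loads;
# the DRL agent then maximises the average utility over arriving tasks.
t = Task(cycles=4.0, deadline=0.4, data=2.0)
best = max(ACTIONS, key=lambda a: utility(t, a))
print(best, {a: round(utility(t, a), 3) for a in ACTIONS})
```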
Keywords/Search Tags:Internet of Vehicles, Computation Offloading, Priority, Resource Allocation, Deep Reinforcement Learning