
Research On Edge Computing Technology Of Internet Of Vehicles Based On Deep Reinforcement Learning

Posted on: 2022-07-09
Degree: Master
Type: Thesis
Country: China
Candidate: Y Yuan
Full Text: PDF
GTID: 2492306605471884
Subject: Traffic Information Engineering & Control

Abstract/Summary:
The rapid development of the Internet of Vehicles (IoV) has given rise to more and more vehicular applications, many of which are delay-sensitive and computation-intensive. However, resource-constrained vehicles cannot meet the delay requirements of these applications. Researchers have found that combining edge computing with the Internet of Vehicles can effectively solve this problem: vehicle tasks are offloaded through edge computing technology, and the resources of edge servers deployed on roadside units are used to help vehicles process tasks, which effectively meets the delay requirements of vehicular applications. Most current research on edge offloading of vehicle tasks considers static scenarios, but vehicle offloading is a random process over continuous time. The offloading decision for a vehicle task at each moment changes the edge offloading environment of the IoV and thereby affects the offloading decision at the next moment, an effect that static scenarios typically ignore. Considering this problem, this paper proposes a dynamic edge offloading model for vehicle tasks based on Software Defined Networking (SDN), in which the edge SDN controller centrally dispatches vehicle tasks. To solve this model, this paper also proposes an edge offloading algorithm for vehicle tasks based on Deep Deterministic Policy Gradient (DDPG). First, the three elements of state, action, and reward in DDPG are determined according to the offloading model; the model solution is then formulated as a Markov decision process, which is used to derive the offloading scheme for vehicle tasks in the dynamic offloading environment. Simulation results show that the algorithm adapts well to dynamic offloading scenarios over short continuous time spans and effectively improves the delay performance of vehicle tasks.

Because edge servers are shared, a vehicle can offload tasks to any edge server within its communication range. However, the non-uniform distribution of vehicles leads to problems such as uneven task assignment across edge servers, low computing-resource efficiency, and excessive load. Most current work uses the remote cloud or a vehicle cloud to relieve heavily loaded edge servers, but computing resources on lightly loaded edge servers are still wasted. To address this, we first design a domain-based, hierarchical SDN IoV network architecture in which a master SDN controller centrally manages and schedules the tasks of the edge servers. Based on this architecture, a load balancing model for edge servers is proposed that minimizes the mean square error of the edge-server loads. An edge server load balancing algorithm based on Deep Q-Network (DQN) is then proposed to solve this problem. The algorithm transforms the load scheduling problem of multiple edge servers into a sequential decision process for a single server, reducing the complexity of solving the model. Subsequently, this paper designs rewards as a feedback mechanism for each server's load scheme and converts minimizing the load mean square error into maximizing the cumulative reward of each edge-server load decision. Simulation results show that the algorithm significantly improves the resource utilization of edge servers while reducing task processing delay.
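The MDP formulation described above (state, action, reward, with reward tied to task delay) can be illustrated with a minimal sketch. All names, parameters, and the delay formula below are illustrative assumptions, not the thesis's actual model: the state is assumed to include task size and required CPU cycles, the action is a continuous offloading ratio (matching DDPG's continuous action space), and the reward is the negative task delay.

```python
# Hypothetical sketch of the MDP elements for DDPG-based offloading.
# state ~ (task size, CPU cycles, server load); action = offloading
# ratio in [0, 1]; reward = negative task completion delay.
# Parameter values are assumptions for illustration only.

def task_delay(task_bits, cpu_cycles, offload_ratio,
               local_cps=1e9, edge_cps=1e10, uplink_bps=1e7):
    """Delay when `offload_ratio` of the task goes to the edge server."""
    local_delay = (1 - offload_ratio) * cpu_cycles / local_cps
    tx_delay = offload_ratio * task_bits / uplink_bps
    edge_delay = offload_ratio * cpu_cycles / edge_cps
    # Local and offloaded parts execute in parallel; the offloaded
    # part additionally pays the uplink transmission delay.
    return max(local_delay, tx_delay + edge_delay)

def reward(task_bits, cpu_cycles, offload_ratio):
    # DDPG maximizes cumulative reward, so negate the delay.
    return -task_delay(task_bits, cpu_cycles, offload_ratio)

# A 1 Mb task needing 1e9 CPU cycles: fully local vs. fully offloaded.
r_local = reward(1e6, 1e9, 0.0)
r_edge = reward(1e6, 1e9, 1.0)
```

With these assumed parameters, full offloading yields a higher (less negative) reward than local execution, which is the kind of gap the learned policy exploits.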
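The load-balancing objective, minimizing the mean square error of edge-server loads, and the reduction of multi-server scheduling to a per-server sequential decision process can be sketched as follows. The greedy rule below (assign each task to the currently least-loaded server) is only a stand-in baseline for the DQN policy; the task sizes and server count are assumptions for illustration.

```python
# Illustrative sketch: the load-balancing objective is the mean square
# error of server loads, and tasks are placed one at a time (the
# sequential decision process the abstract describes). The thesis uses
# a DQN policy for each placement; a greedy heuristic stands in here.

def load_mse(loads):
    """Mean square error of server loads around their mean."""
    mean = sum(loads) / len(loads)
    return sum((l - mean) ** 2 for l in loads) / len(loads)

def assign_tasks(task_loads, num_servers):
    servers = [0.0] * num_servers
    for t in task_loads:
        # One decision per task: choose the least-loaded server.
        idx = min(range(num_servers), key=lambda i: servers[i])
        servers[idx] += t
    return servers

# Assumed workload: six tasks spread over three edge servers,
# compared against piling everything onto a single server.
balanced = assign_tasks([5, 3, 8, 2, 6, 4], 3)
skewed = [28.0, 0.0, 0.0]
```

A lower `load_mse` for the balanced assignment corresponds to a higher cumulative reward in the abstract's formulation, since the reward is designed so that maximizing it minimizes the load mean square error.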
Keywords/Search Tags:Internet of Vehicles, Edge Computing, Computing Offloading, Deep Reinforcement Learning, Load Balancing