
Research On Computation Offloading Based On Reinforcement Learning In Vehicular Edge Computing

Posted on: 2023-05-31
Degree: Master
Type: Thesis
Country: China
Candidate: K Lin
Full Text: PDF
GTID: 2532307151482244
Subject: Materials engineering
Abstract/Summary:
With the rapid development of communication and computer technologies, vehicles are equipped with advanced sensors and On-Board Units (OBUs) and are gradually evolving from personal means of transport into a new generation of intelligent and connected terminals, known as Intelligent and Connected Vehicles (ICVs). The development of ICV technology has given rise to a variety of in-vehicle intelligent applications. These applications, such as autonomous driving, augmented reality and virtual reality, require high-performance computing resources, which challenges the limited computing capacity of the vehicle. To address this problem, computation offloading in Vehicular Edge Computing (VEC) breaks through the limitations of the traditional cloud platform: by deploying computing resources at the edge, close to vehicles, it can provide near-end services that meet the computational needs of such applications. However, computation offloading in VEC faces problems in different scenarios, such as communication interruptions, resource competition and malicious attacks, which may seriously degrade the quality of service of ICV applications. It is therefore crucial to design computation offloading strategies that solve these problems; how to design computation offloading strategies for specific scenarios to meet application service requirements is the key issue of computation offloading in VEC.

To tackle these problems, this thesis designs corresponding computation offloading strategies for different scenarios. Three scenarios are studied: computation offloading for a single vehicle in a static VEC environment, computation offloading for a single vehicle in a dynamic VEC environment, and computation offloading for multiple vehicles in a dynamic VEC environment. The specific research content covers the following three aspects.

(1) To address the computation offloading problem for a single vehicle in a static VEC environment, this thesis takes the reasoning tasks of autonomous driving as the specific computation tasks. Considering changes in the number of edge nodes, changes in the dependency structure of the reasoning modules, and edge node failures in the VEC environment, a reasoning module offloading strategy based on a reinforcement learning algorithm is proposed. The purpose of this strategy is to meet the deadline constraint of the reasoning modules in a VEC environment subject to malicious attack and to reduce their offloading latency. In detail, taking the dependency relationships among reasoning tasks into account, the strategy uses a dependency-aware algorithm to calculate the offloading latency of the reasoning modules. To reduce the offloading latency while satisfying the deadline constraint, the offloading decision for the reasoning tasks is optimised with a Q-learning algorithm. The experimental results show that this strategy can quickly find an effective offloading solution while satisfying the deadline constraint.
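To make the role of Q-learning in strategy (1) concrete, the following is a minimal, illustrative sketch rather than the thesis's dependency-aware algorithm: tabular Q-learning assigns each module of a simple chain of dependent reasoning modules to one of several edge nodes so as to reduce total offloading latency under a deadline. The module count, node count, latency table, deadline and hyper-parameters are all assumptions made for illustration.

```python
# Minimal sketch (assumed values, not the thesis implementation): tabular Q-learning
# that assigns each module of a dependent reasoning chain to an edge node,
# aiming to minimize total offloading latency under a deadline constraint.
import random

NUM_MODULES = 4          # reasoning modules executed as a chain m0 -> m1 -> m2 -> m3 (assumed)
NUM_NODES = 3            # candidate edge nodes (assumed)
DEADLINE = 20.0          # latency budget in ms (assumed)

# Assumed per-module execution latency on each node, and transfer latency between nodes.
EXEC = [[3.0, 5.0, 4.0], [6.0, 2.0, 5.0], [4.0, 4.0, 3.0], [5.0, 3.0, 6.0]]
TRANSFER = 1.5           # extra latency when consecutive modules run on different nodes

def episode_latency(placement):
    """Total latency of a placement (list of node indices, one per module)."""
    total = 0.0
    for m, node in enumerate(placement):
        total += EXEC[m][node]
        if m > 0 and placement[m - 1] != node:
            total += TRANSFER
    return total

# Q-table indexed by (module index, node of previous module); -1 means "no previous module".
Q = {(m, p): [0.0] * NUM_NODES for m in range(NUM_MODULES) for p in range(-1, NUM_NODES)}
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

for _ in range(5000):
    placement, prev = [], -1
    for m in range(NUM_MODULES):
        # epsilon-greedy choice of the node for module m
        if random.random() < EPS:
            a = random.randrange(NUM_NODES)
        else:
            a = max(range(NUM_NODES), key=lambda n: Q[(m, prev)][n])
        placement.append(a)
        # immediate cost: execution plus transfer latency; reward is its negative
        cost = EXEC[m][a] + (TRANSfer if False else (TRANSFER if prev not in (-1, a) else 0.0))
        reward = -cost
        if m == NUM_MODULES - 1:
            # terminal step: penalize violation of the chain's deadline
            if episode_latency(placement) > DEADLINE:
                reward -= 100.0
            target = reward
        else:
            target = reward + GAMMA * max(Q[(m + 1, a)])
        Q[(m, prev)][a] += ALPHA * (target - Q[(m, prev)][a])
        prev = a

# Greedy rollout of the learned policy.
best, prev = [], -1
for m in range(NUM_MODULES):
    a = max(range(NUM_NODES), key=lambda n: Q[(m, prev)][n])
    best.append(a)
    prev = a
print("placement:", best, "latency:", episode_latency(best))
```

The state here is simply the pair (module index, node of the previous module); the thesis's strategy additionally accounts for richer dependency structures, changes in the number of edge nodes, and edge node failures, which this toy sketch does not model.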
(2) To address the computation offloading problem for a single vehicle in a dynamic VEC environment, this thesis takes ICV applications as the specific research object. Considering communication interruptions, the change of vehicle location caused by vehicle mobility, and the dependency structure of ICV applications, an ICV computation offloading strategy based on deep reinforcement learning is proposed. The purpose of this strategy is to meet service demands and extend the driving range of ICVs by jointly optimizing the offloading failure rate and the total energy consumption during the application offloading process. In detail, the strategy models the computation offloading problem for ICV applications as a Markov Decision Process (MDP). To jointly reduce the offloading failure rate and the energy consumption, the computation offloading decision is optimised with a deep Q-network. The experimental results show that the proposed strategy can effectively reduce the average offloading failure rate and energy consumption of in-vehicle intelligent applications.

(3) To address the computation offloading problem for multiple vehicles in a dynamic VEC environment, this thesis takes Deep Neural Network (DNN) applications as the research object. Considering the dependency relationships among DNN layers and the time-varying offloading latency, a dependency-aware computation offloading strategy for multi-vehicle scenarios based on deep reinforcement learning is proposed. The purpose of this strategy is to meet service requirements by reducing the offloading failure rate of DNN applications. In detail, the strategy models the computation offloading problem in multi-vehicle scenarios as an MDP. To reduce the average offloading failure rate of ICVs, the computation offloading decision is optimised with a multi-agent deep deterministic policy gradient algorithm. The experimental results show that the proposed strategy can effectively reduce the average offloading failure rate of DNN applications.
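Strategies (2) and (3) both cast the offloading decision as an MDP and train a neural policy on it. The sketch below shows a minimal single-agent deep Q-network in PyTorch for such an MDP; the state and action dimensions, reward convention and hyper-parameters are illustrative assumptions rather than the thesis's model, and strategy (3) would replace the single Q-network with the decentralized actors and centralized critics of a multi-agent deep deterministic policy gradient scheme.

```python
# Minimal single-agent DQN sketch for an offloading MDP (illustrative assumptions only).
import random
from collections import deque
import torch
import torch.nn as nn

STATE_DIM = 8      # e.g. vehicle position/speed, task size, channel quality (assumed)
NUM_ACTIONS = 4    # e.g. local execution or one of three edge nodes (assumed)

class QNet(nn.Module):
    """Small fully connected Q-network mapping a state to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())   # target net would be periodically re-synced (not shown)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
# Replay buffer of transitions (state, action, reward, next_state, done),
# appended by an environment interaction loop that is not shown here.
buffer = deque(maxlen=10_000)
GAMMA, EPS, BATCH = 0.95, 0.1, 64

def act(state):
    """Epsilon-greedy offloading decision for one task."""
    if random.random() < EPS:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step():
    """One gradient step on a sampled minibatch of stored transitions."""
    if len(buffer) < BATCH:
        return
    batch = random.sample(buffer, BATCH)
    s, a, r, s2, done = map(torch.as_tensor, zip(*batch))
    s, s2, r = s.float(), s2.float(), r.float()
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(1).values * (1.0 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example of recording one (assumed) transition and attempting a training step.
s0 = [0.0] * STATE_DIM
a0 = act(s0)
buffer.append((s0, a0, -1.0, [0.1] * STATE_DIM, False))
train_step()
```

A reward of the form "minus a failure indicator, minus a weighted energy term" pushed into the buffer with each transition would reflect the joint objective of strategy (2); the exact reward shaping used in the thesis is not specified in the abstract and is not reproduced here.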
Keywords/Search Tags: Internet of Vehicles, Computation offloading, Reinforcement learning, Intelligent and connected vehicle