
Research On Content Delivery And Task Offloading For Vehicular Edge Computing

Posted on: 2023-06-12
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z J Nan
Full Text: PDF
GTID: 1522306821475214
Subject: Information and Communication Engineering
Abstract/Summary:
With the rapid development of vehicular applications such as target recognition, path planning, high-definition maps, and infotainment, the demand for storage and computation resources in the Internet of Vehicles is growing explosively. Although cloud computing can alleviate these challenges, its bottleneck is the high latency caused by massive data transmission. Recently, vehicular edge computing has been proposed, which migrates storage and computation resources to the network edge close to vehicles. It not only overcomes the problem of limited on-board resources but also avoids high latency, and it is considered a new computing paradigm for improving the communication, storage, and computation capabilities of vehicles. However, in complex, diverse, and heterogeneous Internet of Vehicles scenarios, vehicle movement brings highly dynamic network topology, uncertainty, and fast time-varying wireless propagation, while the communication, storage, and computation resources of edge networks are limited. Designing and implementing efficient content delivery and task offloading policies that satisfy the quality-of-service requirements of vehicular applications therefore remains a great challenge. To this end, this dissertation conducts an in-depth study of user-centric content delivery with delay constraints, task offloading and resource allocation with delay optimization, and task offloading and resource allocation under uncertain result feedback delay. The main work and contributions of this dissertation include the following three aspects:

(1) First, we study the user-centric content delivery problem with service delay constraints in a vehicular edge computing scenario in which roadside units (RSUs) are equipped with caching resources. The objective is to minimize the vehicle’s cost under usage-based pricing. Finding an optimal content delivery policy is modeled as a finite-horizon Markov decision process. Since the cache state of each RSU and the wireless channel qualities between the vehicle and the RSUs are usually unknown to the vehicle a priori, the vehicle must learn the optimal delivery policy by interacting with the environment. To solve this problem, we propose a deep reinforcement learning-based algorithm that makes dynamic content delivery decisions. Simulation results show that the algorithm reduces the vehicle’s cost by 5% and improves the success probability of delivery transmission by 8%. In addition, we find that under a probabilistic caching strategy, high vehicle mobility can improve the data offloading ratio and thus reduce the vehicle’s cost.
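As an illustration of the kind of agent described in (1), the following is a minimal sketch of a value-based deep reinforcement learning delivery policy, assuming a PyTorch implementation; the state layout (remaining content size, remaining deadline, per-RSU cache flags and channel qualities), the action set (request from one RSU or fall back to the cellular link), and the network sizes are hypothetical choices, not specifications from the dissertation.

```python
# Hypothetical sketch of a value-based (DQN-style) content delivery agent.
# All names, dimensions, and hyperparameters are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_RSUS = 4                   # assumed number of candidate RSUs along the road
STATE_DIM = 2 + 2 * NUM_RSUS   # remaining size, remaining deadline,
                               # per-RSU cache flag and channel quality
NUM_ACTIONS = NUM_RSUS + 1     # request from one RSU, or fall back to cellular


class QNetwork(nn.Module):
    """Estimates the expected future cost of each delivery action in a state."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


q_net = QNetwork(STATE_DIM, NUM_ACTIONS)
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # transitions: (state, action, reward, next_state, done)
GAMMA, EPSILON = 0.99, 0.1


def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy choice of which RSU (or the cellular link) to request from."""
    if random.random() < EPSILON:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())


def train_step(batch_size: int = 64) -> None:
    """One temporal-difference update over a sampled minibatch of transitions."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = torch.stack([t[0] for t in batch])
    actions = torch.tensor([t[1] for t in batch])
    rewards = torch.tensor([t[2] for t in batch])
    next_states = torch.stack([t[3] for t in batch])
    dones = torch.tensor([float(t[4]) for t in batch])
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        targets = rewards + GAMMA * (1.0 - dones) * q_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a setup the per-step reward would encode the negative usage-based payment, with a penalty when the delivery deadline is violated; the environment interaction itself (channel realizations, RSU cache hits) is not sketched here.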
(2) Then, we study the task offloading decision and resource allocation problem under the dynamic network topology caused by vehicle movement in the vehicular edge computing scenario. Specifically, we jointly optimize the task offloading decisions, the uplink bandwidth allocation, and the computation resource allocation. The problem is formulated as a non-convex mixed-integer nonlinear program that minimizes the average delay, which consists of the task offloading delay, the task computation delay, and the result feedback delay. To solve it, we derive a lower bound on the optimum, based on which we propose an approximation algorithm. To handle large-scale scenarios, a low-complexity algorithm is further developed based on geometric programming. Simulation results show that both algorithms achieve nearly optimal performance, with a gap to the optimal solution of no more than 5.4%; although the first algorithm performs better, the computational complexity of the second algorithm is much lower.

(3) Finally, we study task offloading decisions and resource allocation in a dynamic and uncertain vehicular edge computing environment. Specifically, the objective is to jointly optimize the offloading decisions, the computation resource allocation, and the uplink transmission power control of the vehicles under the uncertain result feedback delay caused by vehicle movement and time-varying backhaul network conditions, so that the time-energy cost of the vehicles is minimized. The problem is particularly hard to solve because of the combinatorial offloading decisions, the strong coupling among the resource allocation variables, and the uncertain result feedback delay; therefore, a deep reinforcement learning algorithm based on an actor-critic structure is proposed. In particular, the actor network quickly learns the optimal mapping from the input states to the binary offloading decision of each vehicle, while the critic network provides the optimal solution to the resource allocation problem and evaluates the performance under the offloading decisions output by the actor network. Numerical results show that the proposed algorithm achieves 96.6% of the optimal performance.
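The actor-critic structure described in (3) could be organized along the following lines. This is a minimal sketch in which the actor outputs relaxed per-vehicle offloading probabilities that are quantized into candidate binary decisions, while the critic is only a placeholder cost function standing in for the resource allocation subproblem solved in the dissertation; all dimensions, the quantization rule, and the dummy cost are assumptions.

```python
# Illustrative actor-critic offloading sketch. The network sizes, the state
# layout, the candidate-generation rule, and the critic's dummy cost are all
# assumptions; the dissertation's actual resource allocation solver is not shown.
import itertools

import torch
import torch.nn as nn

NUM_VEHICLES = 5
STATE_DIM = 3 * NUM_VEHICLES   # e.g. task size, channel gain, feedback-delay estimate per vehicle


class Actor(nn.Module):
    """Maps the network state to a relaxed (probabilistic) offloading decision per vehicle."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_VEHICLES), nn.Sigmoid(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def critic_cost(state: torch.Tensor, decision: torch.Tensor) -> float:
    """Placeholder for the critic: given a binary offloading decision, it would
    solve the computation-resource and power-control subproblem and return the
    resulting time-energy cost. A dummy quadratic cost stands in for that here."""
    return float(((decision - state[:NUM_VEHICLES]) ** 2).sum())


def quantize(probs: torch.Tensor, k: int = 3) -> list:
    """Turn the actor's relaxed output into k candidate binary decisions by
    flipping the least confident entries (one simple quantization rule)."""
    base = (probs > 0.5).float()
    order = torch.argsort(torch.abs(probs - 0.5)).tolist()   # least confident first
    candidates = [base]
    for idx in itertools.islice(order, k - 1):
        cand = base.clone()
        cand[idx] = 1.0 - cand[idx]
        candidates.append(cand)
    return candidates


def offload(actor: Actor, state: torch.Tensor) -> torch.Tensor:
    """Actor proposes a relaxed decision, candidates are quantized,
    and the (placeholder) critic selects the cheapest binary decision."""
    with torch.no_grad():
        probs = actor(state)
    return min(quantize(probs), key=lambda d: critic_cost(state, d))


actor = Actor()
decision = offload(actor, torch.rand(STATE_DIM))   # binary offloading decision per vehicle
print(decision)
```

In a full implementation, the critic would solve the resource allocation subproblem for every candidate decision, and the selected state-decision pairs would be fed back (for example through a replay memory) to train the actor toward the decisions the critic rates best.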
Keywords/Search Tags:Vehicular Edge Computing, Content Delivery, Task Offloading, Deep Reinforcement Learning, Convex Optimization