With the growing number of vehicles on the road, the traffic environment is becoming increasingly complex. To manage large numbers of vehicles efficiently, research and development on the Internet of Vehicles (IoV) has received considerable attention. Because vehicles are highly mobile and have limited onboard resources, cloud-based IoV architectures cannot meet the requirements of low latency and high reliability. Edge computing addresses these problems by deploying computation near the terminals, compensating for scarce terminal resources and the long round-trip delay of cloud services. Consequently, how to deploy edge servers and cache services sensibly has attracted wide attention from researchers. However, most existing studies pay insufficient attention to the uneven spatial distribution of vehicles, which is especially pronounced across different road segments. This uneven vehicle distribution implies that service requests are also unevenly distributed, a feature that cannot be ignored during service deployment. Moreover, in realistic vehicular edge computing scenarios, a single service often cannot satisfy a user's request on its own; multiple services must cooperate to complete the processing. Caching associated services on the same edge server, or on adjacent servers, in advance avoids the transmission delay incurred by data exchange between them. This paper therefore studies on-demand edge server deployment and service caching in the Internet of Vehicles environment. The main contributions are as follows:

(1) To address the uneven distribution of service demand, an on-demand deployment strategy for vehicular edge computing is proposed, aiming to minimize the average response delay of vehicles requesting services from edge servers. The joint service deployment and resource allocation problem is formulated as an integer nonlinear program, and a deployment and resource allocation algorithm based on deep reinforcement learning is proposed to solve it. Simulation results show that the method deploys services and allocates computing resources appropriately in both request-dense and request-sparse areas. Compared with the benchmark algorithms, it reduces the average response delay by about 4%-19% and mitigates the problems caused by uneven request distribution.

(2) To reduce the load on edge servers, a cooperative caching strategy for vehicular edge computing services is proposed to reduce the average response delay of associated services. The method mines the relationships between services directly from historical service request records, uses an ARIMA (autoregressive integrated moving average) model to predict future service requests, establishes a service invocation delay model and an energy consumption model, and proposes a cooperative service caching and resource allocation algorithm based on deep reinforcement learning to allocate the computing resources of edge servers sensibly. Simulation results show that, compared with the baseline algorithms, the method not only effectively reduces the data exchange delay and invocation delay of associated services, but also significantly reduces the overall energy consumption of service caching.
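The core idea of contribution (1), learning how to split edge computing resources between request-dense and request-sparse road segments, can be illustrated with a toy single-state Q-learning sketch. All names and numbers here (the two segments, their request rates, the delay model, the budget of 4 compute units) are invented for illustration; the thesis itself uses deep reinforcement learning on a much richer integer-nonlinear formulation, not this tabular simplification.

```python
import random

# Toy sketch: allocate a fixed budget of compute units between a request-dense
# and a request-sparse road segment so total response delay is minimized.
# Rates, delay model, and budget are hypothetical illustration values.
RATES = {"dense": 8.0, "sparse": 2.0}   # assumed requests per second
BUDGET = 4                              # assumed compute units to distribute

def delay(units_dense):
    """Total response delay when `units_dense` units serve the dense segment."""
    units_sparse = BUDGET - units_dense
    # Simple proxy delay model: delay grows with load, shrinks with resources.
    return RATES["dense"] / units_dense + RATES["sparse"] / units_sparse

def learn(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning over the (single-state) allocation actions."""
    rng = random.Random(seed)
    actions = range(1, BUDGET)          # each segment keeps at least 1 unit
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        a = rng.choice(list(actions)) if rng.random() < eps else max(q, key=q.get)
        reward = -delay(a)              # lower delay -> higher reward
        q[a] += alpha * (reward - q[a])
    return max(q, key=q.get)            # greedy allocation after training

best = learn()
print("units for dense segment:", best, "total delay:", round(delay(best), 2))
```

Under these invented rates the learned policy assigns more units to the dense segment, which mirrors the abstract's claim that the algorithm allocates resources differently in dense and sparse request areas.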
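The request-forecasting step in contribution (2) can be sketched in a few lines. The following is a minimal stand-in for the ARIMA model: an ARIMA(1,1,0)-style forecast that first-differences the series and fits an AR(1) coefficient by least squares. The service names and request counts are invented for the example; the thesis applies a full ARIMA model to historical request logs.

```python
def forecast_next(history):
    """ARIMA(1,1,0)-style one-step forecast of the next request count."""
    # d = 1: work on first differences to remove trend.
    diff = [b - a for a, b in zip(history, history[1:])]
    # AR(1) on the differenced series: diff[t] ~ phi * diff[t-1],
    # with phi estimated by least squares.
    x, y = diff[:-1], diff[1:]
    denom = sum(v * v for v in x)
    phi = sum(a * b for a, b in zip(x, y)) / denom if denom else 0.0
    # Integrate back: next level = last level + predicted difference.
    return history[-1] + phi * diff[-1]

# Hypothetical hourly request counts for two services on one road segment.
requests = {
    "lane-detection": [40, 44, 50, 57, 65],
    "map-update": [12, 12, 13, 11, 12],
}
for service, hist in requests.items():
    print(service, round(forecast_next(hist), 1))
```

A caching strategy would then use such per-service forecasts to decide which associated services to place together on the same or adjacent edge servers before the requests arrive.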