The development of modern society generates large volumes of computation-intensive data at end users' electronic devices. Cloud computing provides a capable processing platform that can effectively relieve the data backlog on users' devices. However, cloud computing has its own drawbacks: if all data is dispatched to the cloud platform for processing, network congestion, excessive energy consumption, and long delays are inevitable. To address these challenges, the edge computing framework has been proposed. By pushing cloud computing resources to the network edge, the distance between resources and users is shortened, effectively alleviating congestion and transmission problems. Compared with pure cloud computing, users can dispatch small workloads directly to edge devices for processing, while complex workloads can be handled by the edge in cooperation with the cloud. In this way, delay and energy consumption can be effectively controlled and constrained. This thesis therefore takes the energy consumption and delay of task scheduling in edge computing as its optimization objectives and applies deep reinforcement learning algorithms to optimize them. Two methods are designed for formulating task scheduling strategies in different edge computing scenarios. The main work is as follows:

(1) An edge computing scenario is presented that consists of three parts: the user terminals, the task scheduler, and the edge cloud. Based on this scenario, energy consumption and delay metrics are constructed to formulate a multi-objective optimization problem over both. A new Improved Multi-objective Task Scheduling Dueling Double Deep Q Networks (IMTS-D3QN) algorithm is then proposed, and an appropriate state space, action space, and reward function are designed for it to facilitate the
optimization of the objectives. Finally, Pareto-optimal solutions are obtained through different linear weighted combinations of the objectives to minimize response time and energy consumption. Simulation results show that the proposed algorithm outperforms comparison algorithms in both energy consumption and delay.

(2) Unmanned Aerial Vehicle (UAV)-assisted edge computing can improve the agility of computing resources, addressing the fixed positions, limited coverage, and variable communication quality of edge servers in wireless networks. A multi-layer computing framework is therefore proposed, consisting of the UAV side, the edge server side, and the cloud side. Based on the different task requests of user terminals, reasonable task scheduling and resource allocation policies are implemented, and delay, energy consumption, and task scheduling success rate metrics are constructed. An Improved Softmax Deep Double Deterministic Policy Gradients (IMSD3) algorithm based on policy gradients is adopted, and an appropriate state space, action space, and reward function are designed for it to optimize the objectives. Simulation results show that IMSD3 achieves better optimization of energy consumption, delay, and task scheduling success rate than other reinforcement learning algorithms.

In summary, by designing edge computing schemes for different scenarios, it can be seen that the proposed algorithms reduce energy consumption and delay to a certain extent and improve the users' quality of experience, which is of significance for task scheduling research in edge computing environments.
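The Pareto-front construction via linear weighted combinations described in contribution (1) can be sketched as follows. This is a minimal illustration, not the thesis's actual method or data: the candidate (delay, energy) pairs and the weight grid are invented for the example, and in the thesis each candidate would correspond to a scheduling policy evaluated by the IMTS-D3QN agent.

```python
# Sketch: linear weighted scalarization of (delay, energy) and Pareto filtering.
# Candidate (delay, energy) pairs below are illustrative assumptions.

def weighted_cost(delay, energy, w):
    """Scalarize the two objectives: w weights delay, (1 - w) weights energy."""
    return w * delay + (1 - w) * energy

def pareto_front(candidates):
    """Keep the candidates that no other candidate dominates in both objectives."""
    front = []
    for d, e in candidates:
        dominated = any(
            d2 <= d and e2 <= e and (d2, e2) != (d, e) for d2, e2 in candidates
        )
        if not dominated:
            front.append((d, e))
    return sorted(front)

def best_by_weight(candidates, weights):
    """For each weight, pick the candidate minimizing the weighted cost."""
    return {
        w: min(candidates, key=lambda c: weighted_cost(c[0], c[1], w))
        for w in weights
    }

candidates = [(2.0, 9.0), (3.0, 5.0), (6.0, 4.0), (8.0, 3.0), (6.0, 7.0)]
front = pareto_front(candidates)          # non-dominated (delay, energy) pairs
picks = best_by_weight(candidates, [0.0, 0.5, 1.0])
```

Sweeping the weight from 0 to 1 trades delay against energy: w = 1 selects the lowest-delay candidate, w = 0 the lowest-energy one, and each minimizer of a weighted sum lies on the Pareto front.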
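The Softmax Deep Double Deterministic Policy Gradients family underlying IMSD3 replaces the hard max/min in the critic's target value with a softmax operator over Q-values of sampled actions, which smooths value estimation between the overestimation of a max and the underestimation of a min. A minimal sketch of that operator on scalar Q-values is shown below; the inverse-temperature parameter `beta` and the example values are assumptions for illustration, not taken from the thesis.

```python
# Sketch: softmax value operator as used in SD3-style critic targets.
import math

def softmax_value(q_values, beta=1.0):
    """Softmax-weighted average of Q-values.

    Approaches max(q_values) as beta -> infinity and the plain mean as
    beta -> 0, interpolating between optimistic and neutral targets.
    """
    m = max(q_values)  # subtract the max before exponentiating, for stability
    weights = [math.exp(beta * (q - m)) for q in q_values]
    total = sum(weights)
    return sum(w * q for w, q in zip(weights, q_values)) / total
```

In a full agent this operator would be applied to the Q-values of actions sampled around the target policy's output when forming the bootstrapped critic target.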