The Industrial Internet of Things (IIoT) connects massive numbers of digital devices and generates large volumes of data. The demand for computing resources is growing explosively, which poses severe challenges to the limited computing resources of terminal devices. A cloud-edge collaborative computing mode, combining edge computing and cloud computing, is used to handle the computation-intensive and delay-sensitive tasks generated by IIoT devices, so as to better meet the needs of system users. When computing tasks are offloaded, the computing, communication, storage, and other resources in the cloud-edge collaborative system change dynamically. How to plan the task scheduling strategy reasonably, allocate resources efficiently, ensure the reliability of data transmission, and maximize resource utilization has become an urgent problem. Therefore, this paper focuses on the joint optimization of computation offloading and resource allocation in the cloud-edge collaborative system. The main contents are as follows:

(1) For the IIoT computing scenario, this paper constructs a network architecture based on cloud-edge collaborative computing and plans real-time offloading decisions and resource allocation strategies. The communication and computing resources of the edge nodes are dynamically allocated to randomly generated computing tasks. In view of the differing needs of heterogeneous services in the IIoT system, a process scheduling algorithm is adopted to adjust the execution order of the different types of tasks that coexist in the computing and transmission queues, and on this basis the system delay and energy consumption are jointly optimized.

(2) To cope with the dynamic changes of system resources during edge computation offloading, the Deep Deterministic Policy Gradient (DDPG) algorithm is used in this paper. The algorithm
allocates bandwidth resources in proportion to each user's data in the transmission queue of the current time slot, allocates computing resources to the tasks arriving at the edge node in the current time slot, and optimizes system performance by training a deep reinforcement learning network. Simulation results show that, compared with the benchmark strategy and the Deep Q-Network (DQN) algorithm, the proposed algorithm converges faster. Under different delay constraints and task generation rates, it achieves the lowest task packet loss ratio and the smallest average system cost, and its training remains stable; the delay and energy consumption of the system are effectively reduced.

(3) Aiming at the problems of computation offloading, load balancing, and queuing of heterogeneous tasks in the device-edge-cloud three-layer architecture, this paper trains the model with a DDPG algorithm based on a Long Short-Term Memory (LSTM) neural network. The algorithm uses a process scheduling strategy to sort the heterogeneous tasks in the system to reduce queuing time, improves system reliability by exploiting temporal correlation, and predicts the load level of the system queues, thereby improving the real-time performance of the offloading decision. Simulation experiments verify that, for heterogeneous task processing, the LSTM-based DDPG algorithm outperforms DDPG with first-come-first-served and shortest-task-first process scheduling policies in terms of average turnaround time, system reward, and task packet loss.
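The proportional bandwidth rule described in contribution (2) can be sketched as follows. This is a minimal illustration only; the function and variable names are our own and are not taken from the paper:

```python
def allocate_bandwidth(total_bandwidth, queued_bits):
    """Split the total bandwidth among users in proportion to the
    amount of data each user has waiting in the transmission queue
    for the current time slot."""
    total_bits = sum(queued_bits)
    if total_bits == 0:
        # No queued data: nothing to transmit, so no allocation.
        return [0.0] * len(queued_bits)
    return [total_bandwidth * b / total_bits for b in queued_bits]

# Example: three users with 2, 6, and 2 Mbit queued share 10 MHz.
print(allocate_bandwidth(10.0, [2.0, 6.0, 2.0]))  # [2.0, 6.0, 2.0]
```

In the paper this share would be recomputed every time slot as the queues evolve; here the queue state is simply passed in as a list.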
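Contributions (1) and (3) compare process scheduling policies such as first-come-first-served (FCFS) and shortest-task-first for ordering queued tasks. The effect on average turnaround time can be illustrated with a toy single-queue model; this sketch assumes all tasks arrive at time zero, which is a simplification not stated in the paper:

```python
def avg_turnaround(service_times):
    """Average turnaround time when tasks are served one at a time
    from a single queue, all assumed to arrive at time zero."""
    clock, total = 0.0, 0.0
    for s in service_times:
        clock += s      # completion time of this task
        total += clock  # turnaround = completion time - arrival (0)
    return total / len(service_times)

tasks = [5.0, 1.0, 2.0]           # service times in arrival order
fcfs = avg_turnaround(tasks)      # serve in arrival order
stf = avg_turnaround(sorted(tasks))  # shortest task first
print(fcfs, stf)
```

Running short tasks first lowers the average turnaround time (here from about 6.33 to 4.0), which is the motivation for reordering heterogeneous tasks in the computing and transmission queues.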