With the rapid development of Internet of Things (IoT) technology and 5G networks, fog computing has emerged as a new edge computing paradigm that places computation, storage, and network resources closer to users' edge devices. This approach reduces data transmission latency, decreases energy consumption, and improves resource utilization efficiency. To address the scarcity of computing resources in user equipment, fog computing has been widely studied in tandem with computation offloading techniques. However, existing work still lacks comprehensive consideration of sustainable energy provision, rational network resource allocation, and collaborative caching between the cloud and fog layers. Therefore, this thesis proposes an intelligent offloading and resource allocation scheme based on fog computing. The key contributions are summarized as follows:

1) Intelligent offloading for fog computing with caching management: With the explosive growth of terminal data, an increasing number of terminal devices demand more computing resources and higher energy efficiency, and traditional computation offloading methods cannot meet their low-energy requirements. To solve this problem, a fog computing intelligent offloading mechanism with caching management is proposed. Specifically, an energy-minimization problem is formulated by jointly considering the offloading decision, the caching decision, the channel bandwidth ratio, and the computing resource allocation ratio of each fog node, so that all computing tasks in the terminal device layer can be completed. To solve this problem, an intelligent caching offloading algorithm based on the chaotic bat algorithm is proposed. It incorporates chaotic sequences and a random mutation strategy into the traditional bat algorithm, enhancing its scalability and flexibility and improving the global convergence and local search capabilities of the
algorithm. Finally, simulation results show that the proposed algorithm converges quickly and significantly improves system performance, demonstrating clear advantages over traditional algorithms.

2) Deep reinforcement learning-based cloud-fog collaborative computation offloading and caching: To further meet the demands of delay-sensitive and computation-intensive applications, and to reduce the transmission latency and energy consumption of IoT devices during task processing, a cloud-fog collaborative computation offloading and caching mechanism based on deep reinforcement learning is proposed. Specifically, an optimization problem over computing resources, bandwidth, task caching, and computation offloading is formulated to minimize the weighted sum of task execution delay and energy consumption. To solve this problem, a deep reinforcement learning-based algorithm for computation offloading, caching, and resource allocation is proposed. The algorithm uses experience replay and soft target-network updates to improve the learning efficiency of the neural network. With the help of an indicator function, which converts between discrete and continuous actions, it overcomes the limitation of traditional policy gradient algorithms, which cannot handle continuous and discrete actions simultaneously. Additionally, it leverages both cloud and fog caching capabilities, combined with bandwidth and computing resource allocation, to make optimal offloading decisions and caching strategies. Finally, simulation results show that, compared with other schemes, the proposed algorithm converges rapidly and significantly reduces the system cost.

3) Intelligent computation offloading with energy harvesting and resource allocation: To meet the demand for a sustainable energy supply for rechargeable
devices in IoT scenarios, and to extend device lifespan while reducing latency and energy consumption, this thesis proposes an intelligent computation offloading mechanism that integrates energy harvesting and resource allocation. Specifically, an optimization problem that minimizes the weighted total cost of task completion delay and energy consumption is constructed by jointly optimizing the fog node offloading decision, bandwidth resources, computing resources, and the power allocation of energy harvesting. To solve this mixed-integer nonlinear programming problem effectively, an intelligent offloading algorithm integrating energy harvesting and resource allocation is proposed. The algorithm is based on the twin delayed deep deterministic policy gradient (TD3) and employs a dual-critic network architecture to mitigate Q-value overestimation; by avoiding the impact of random noise on a single critic network, it improves the stability of learning and offers better global search capability. In addition, the introduction of a probability function and uniform noise further improves the performance of the algorithm. Finally, simulation results show that the algorithm converges well and achieves a significant performance advantage, attaining the lowest overall system cost compared with other offloading schemes.
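The chaotic-bat search of the first contribution could be sketched roughly as follows. This is a minimal illustration only: the logistic-map chaotic sequence, the mutation rate, the search bounds, and the toy objective are all assumptions, not the thesis's actual energy model.

```python
import random

def chaotic_bat_minimize(objective, dim, n_bats=20, iters=100,
                         lo=0.0, hi=1.0, mutation_rate=0.1):
    """Bat algorithm augmented with a logistic-map chaotic sequence
    and a random mutation step (illustrative sketch)."""
    state = [0.7]  # logistic-map state (avoid fixed points such as 0.5)

    def chaos():
        state[0] = 4.0 * state[0] * (1.0 - state[0])  # logistic map, r = 4
        return state[0]

    # Chaotic initialization of bat positions.
    bats = [[lo + (hi - lo) * chaos() for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(bats, key=objective)[:]
    best_val = objective(best)
    for _ in range(iters):
        for i in range(n_bats):
            freq = chaos()  # chaotic frequency instead of a uniform draw
            for d in range(dim):
                vel[i][d] += (bats[i][d] - best[d]) * freq
                bats[i][d] = min(hi, max(lo, bats[i][d] + vel[i][d]))
            # Random mutation to help escape local optima.
            if random.random() < mutation_rate:
                d = random.randrange(dim)
                bats[i][d] = lo + (hi - lo) * random.random()
            val = objective(bats[i])
            if val < best_val:
                best, best_val = bats[i][:], val
    return best, best_val
```

For instance, `chaotic_bat_minimize(lambda x: sum(v * v for v in x), dim=2)` searches a toy quadratic surrogate over [0, 1]^2 and returns the best position found together with its objective value.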
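The hybrid discrete/continuous action handling and the soft target-network update of the second contribution can be illustrated with two small helpers. The action-vector layout, the 0.5 indicator threshold, and the tau value are illustrative assumptions rather than the thesis's exact design.

```python
def soft_update(target_params, source_params, tau=0.005):
    """Soft (Polyak) update of target-network parameters, used alongside
    experience replay to stabilize learning (parameters as flat lists)."""
    return [tau * s + (1.0 - tau) * t for t, s in zip(target_params, source_params)]

def decode_action(action_vec, n_devices):
    """Decode a continuous actor output into discrete offloading/caching
    decisions via an indicator threshold, plus normalized resource shares.
    The layout [offload | cache | bandwidth/CPU shares] is an assumption."""
    offload = [1 if a > 0.5 else 0 for a in action_vec[:n_devices]]
    cache = [1 if a > 0.5 else 0 for a in action_vec[n_devices:2 * n_devices]]
    shares = action_vec[2 * n_devices:]
    total = sum(shares) or 1.0
    shares = [s / total for s in shares]  # shares of bandwidth/computing resources
    return offload, cache, shares
```

For example, `decode_action([0.9, 0.2, 0.7, 0.4, 1.0, 3.0], n_devices=2)` yields offloading decisions `[1, 0]`, caching decisions `[1, 0]`, and resource shares `[0.25, 0.75]`, so one continuous actor output covers both the discrete and the continuous parts of the action space.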
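The dual-critic idea of the third contribution, taking the minimum of two critic estimates to curb Q-value overestimation, can be sketched as below. The noise bounds and scales are illustrative; the thesis mentions uniform noise, whereas standard TD3 uses clipped Gaussian noise.

```python
import random

def td3_target(reward, gamma, q1_next, q2_next):
    """Clipped double-Q target: the minimum of the twin critics curbs
    the overestimation a single critic is prone to."""
    return reward + gamma * min(q1_next, q2_next)

def noisy_target_action(action, noise_scale=0.2, clip=0.5, lo=0.0, hi=1.0):
    """Perturb the target action with bounded uniform noise, then clamp
    the result back into the valid action range."""
    eps = max(-clip, min(clip, random.uniform(-noise_scale, noise_scale)))
    return max(lo, min(hi, action + eps))
```

With reward 1.0, discount 0.9, and critic estimates 2.0 and 3.0, the target is 1.0 + 0.9 * 2.0 = 2.8, i.e. the more optimistic critic is ignored when bootstrapping.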