
Research On Energy Efficiency Optimization And Multi-Stage Offloading In SWIPT Mobile Edge Networks

Posted on: 2024-05-28    Degree: Master    Type: Thesis
Country: China    Candidate: Q M Wang    Full Text: PDF
GTID: 2568307124984909    Subject: Electronic information
Abstract/Summary:
With the rapid growth of wireless sensor networks, terminals generate massive amounts of data. Mobile edge network technology has emerged to support lower latency and more timely responses for network applications. However, mobile edge networks impose strict requirements on latency and energy consumption, so computation offloading strategies must be tailored to the specific network conditions. Moreover, challenges such as power control, energy depletion in edge nodes, and imbalanced energy distribution among nodes persist in mobile edge networks. This thesis therefore introduces Simultaneous Wireless Information and Power Transfer (SWIPT) to address these problems and, on this basis, proposes a multi-stage offloading strategy to overcome the limitations of traditional terrestrial network communication. Because incorporating SWIPT increases the difficulty and computational complexity of system decision-making, deep reinforcement learning is adopted to solve the resulting problems. The main research contributions are as follows:

First, to address the increased node energy consumption caused by computation offloading and resource allocation in traditional edge networks, SWIPT is introduced to supply energy to the devices. A mathematical model jointly considering beamforming, computation offloading, and power control in the SWIPT-based edge network is then established to optimize the system's energy efficiency. Furthermore, to overcome the limitations of traditional terrestrial communication networks and further improve energy efficiency, a multi-tier offloading model with unmanned aerial vehicles (UAVs) is introduced, enabling the system to handle a greater volume of computational data and to use energy more efficiently.

Second, both system models pose non-convex multi-variable optimization problems that are difficult to solve with traditional optimization algorithms; their high computational complexity also makes the system's real-time requirements hard to meet. This thesis therefore proposes a deep reinforcement learning algorithm, designed as follows. First, the information exchange process is designed to eliminate redundant information in the system environment. Second, because traditional Deep Q-Networks (DQN) struggle with continuous action spaces, the Deep Deterministic Policy Gradient (DDPG) algorithm is used to generate solutions. Finally, to mitigate the overestimation problem in DDPG, the Critic network is removed and actions are evaluated through a designed reward function. In addition, for the multi-tier offloading strategy, the inclusion of UAVs enlarges the action space of the energy efficiency problem, so a multi-layer Actor network is introduced to enhance action exploration.

Finally, simulation experiments are conducted on both the energy efficiency optimization strategy and the multi-tier offloading strategy. Energy efficiency optimization strategy experiments: compared with the traditional DQN algorithm, the proposed algorithm improves performance by approximately 5%, validating its effectiveness; it outperforms the traditional fractional programming (FP) algorithm by approximately 2% and achieves an average accuracy of 94% relative to the Weighted Minimum Mean Square Error (WMMSE) algorithm, while significantly reducing decision-making time, confirming its real-time capability; tested on a validation set, it handles signal interference in the network better and effectively addresses power control without requiring perfect channel state information. Multi-tier offloading strategy experiments: the strategy's effectiveness is first validated on a training set; its performance is then compared with various other schemes; finally, compared with the single-tier offloading solution of the energy efficiency optimization strategy, the multi-tier algorithm performs approximately 1.5 times better, indicating superior performance while minimizing the impact on real-time requirements.
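The Critic-free idea summarized above (score continuous actions directly with a designed reward function instead of a learned Critic) can be sketched as follows. This is a minimal illustrative example, not the thesis's actual implementation: the two-dimensional state (channel gain, task bits), the action semantics (offloading ratio, power-splitting ratio), all physical constants, and the random-perturbation policy search are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, HIDDEN, ACTION_DIM = 2, 8, 2  # state: (channel gain, task bits); action: (offload ratio, power split)

def init_params():
    # Small two-layer Actor network (weights and biases).
    return [rng.normal(0, 0.5, (STATE_DIM, HIDDEN)), np.zeros(HIDDEN),
            rng.normal(0, 0.5, (HIDDEN, ACTION_DIM)), np.zeros(ACTION_DIM)]

def actor(state, params):
    # Deterministic policy: map state to continuous actions in (0, 1).
    W1, b1, W2, b2 = params
    h = np.tanh(state @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid squashing

def reward(state, action):
    # Hypothetical energy-efficiency reward (bits processed per joule);
    # every constant here is illustrative, not from the thesis.
    gain, bits = state
    offload, split = np.clip(action, 1e-3, 1 - 1e-3)
    p_tx = 0.2 * split                              # uplink transmit power (W)
    rate = 2e6 * np.log2(1.0 + 1e3 * gain * p_tx)   # achievable uplink rate (bit/s)
    e_tx = p_tx * offload * bits / rate             # energy spent offloading
    e_cpu = 1e-7 * (1.0 - offload) * bits           # energy spent computing locally
    return bits / (e_tx + e_cpu)

def train(episodes=200):
    # Critic-free policy search: perturb the Actor's weights and keep a
    # candidate only if the designed reward improves, so actions are
    # evaluated by the reward function directly rather than by a Critic.
    best = init_params()
    states = rng.uniform(0.1, 1.0, (16, STATE_DIM)) * np.array([1.0, 1e6])
    norm = np.array([1.0, 1e6])                     # scale task bits into ~[0, 1] for the network
    score = lambda p: np.mean([reward(s, actor(s / norm, p)) for s in states])
    best_score = score(best)
    for _ in range(episodes):
        cand = [w + rng.normal(0, 0.05, w.shape) for w in best]
        if score(cand) > best_score:
            best, best_score = cand, score(cand)
    return best, best_score

params, efficiency = train()
print(f"best average efficiency: {efficiency:.3e} bits/J")
```

The thesis instead trains a DDPG-style Actor by gradient methods; the perturbation search here is only a compact stand-in to keep the sketch self-contained while preserving the key design choice: no Critic network, actions scored by the reward alone.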
Keywords/Search Tags:mobile edge network, multi-stage offloading, deep reinforcement learning, SWIPT, energy efficiency optimization