
Research On Computation Offloading In Partial Observation Edge Computing

Posted on: 2022-05-14
Degree: Doctor
Type: Dissertation
Country: China
Candidate: S N Song
Full Text: PDF
GTID: 1488306332962239
Subject: Computer system architecture
Abstract/Summary:
As an extension of cloud computing, edge computing connects to user equipment through high-speed networks, placing computing resources closer to users at the edge of the network. The high-speed communication network and the positional advantage reduce system costs and improve service quality in edge computing. Users can offload local tasks to edge servers to obtain faster and more stable computing services. However, compared with server clusters in cloud computing, edge servers are usually distributed across different areas. The loose cooperation among servers limits both centralized and distributed system management: a single server can hardly obtain global system information in real time and only partially observes the system environment. In addition, user privacy, system security, and communication costs also contribute to partial observation at the edge. Partial observation leads to errors or lags in the system's estimation of task attributes and user behaviors. Conventional computation offloading and system optimization methods, including intelligent offloading models based on deep reinforcement learning, therefore face challenges. We focus on the offloading process and system resource constraints under partial observation by users and servers in edge computing. We first explore the complex and unknown environmental information through Deep Reinforcement Learning (DRL) in a single-edge-server scenario. Then, we propose a distributed modeling and decentralized learning framework for offloading among multiple edge servers to improve the learning efficiency of offloading strategies and reduce communication costs among servers. The contributions of this research are as follows:

(1) We propose a prototype system and a corresponding definition of the semi-online offloading algorithm for modeling the partial-observation edge computing system. In the system design, no edge server or user equipment can obtain the entire system's information, and each makes offloading decisions independently. We analyze how system resource changes affect the Age of Information (AoI) of user tasks based on a pipeline model, and propose offloading methods for partial observation among users. System analysis and simulation experiments show that letting users obtain indirect observations of the system environment through the server is key to improving computation offloading efficiency, for example by predicting the value of offloading tasks and exploiting idle resources.

(2) We study the computation offloading problem based on user behavior prediction under partial observation between users and servers. We use a Markov decision process to describe user behavior and propose the soCoM model. Relying on user behavior prediction, soCoM trains the offloading algorithm self-adaptively, which allocates computing resources reasonably and improves system efficiency. As the number of users increases, the diversity and partial observation of user behaviors cause the data space to explode, making it difficult for DRL to learn offloading methods. After studying and analyzing popular deep reinforcement learning methods, we choose Dueling DQN as the core method of the model. The experimental results show that soCoM with Dueling DQN can effectively predict user behaviors, and the resulting offloading algorithm improves system resource utilization and balances server load.

(3) We investigate the offloading of deadline-aware tasks and its optimization under partial observation among multiple servers. Deadline-aware tasks must complete within a limited time. Under partial observation, the binary judgment of whether a task has completed introduces large observation noise, and the mutual exclusion between system throughput and resource utilization hinders the exploration of effective deep reinforcement learning strategies. This research replaces the traditional Markov model with a decentralized partially observable Markov decision process, draws on the concept of policy distillation, and proposes an offloading model called Fast Decentralized Reinforcement Learning Distillation (Fast-DRD). Fast-DRD filters environmental noise and erroneous exploration with low computational complexity and reduces the over-fitting of deep reinforcement learning in noisy environments. Inspired by the Gossip protocol, Fast-DRD completes self-learning without relying on prior knowledge. In a system composed of multiple edge servers, Fast-DRD supports learning in a self-organizing decentralized manner and deploys offloading methods more flexibly. Compared with naive policy distillation, Fast-DRD reduces communication and computing costs during the learning procedure. The learned offloading model ensures a high task offloading success rate and avoids network and server resource bottlenecks.
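The dueling architecture chosen in contribution (2) can be illustrated with a minimal sketch. This is illustrative only and not the dissertation's soCoM implementation: Dueling DQN splits the Q-function into a scalar state value V(s) and per-action advantages A(s,a), recombined as Q(s,a) = V(s) + (A(s,a) - mean over actions of A). The function and action names below are hypothetical.

```python
# Illustrative sketch of the dueling Q-value decomposition in Dueling DQN:
# Q(s, a) = V(s) + (A(s, a) - mean_a' A(s, a')).
# Pure Python for clarity; a real implementation would compute V and A
# with two heads of a neural network over the observed state.

def dueling_q_values(state_value, advantages):
    """Combine a scalar state value with per-action advantages.

    Subtracting the mean advantage makes the decomposition identifiable:
    the advantage stream only ranks actions against each other, while
    the value stream carries the overall magnitude of the state.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + (a - mean_adv) for a in advantages]

# Hypothetical offloading state with three candidate actions,
# e.g. "run locally", "offload to edge server", "defer task".
q = dueling_q_values(state_value=2.0, advantages=[0.5, 1.5, -2.0])
best_action = max(range(len(q)), key=lambda i: q[i])  # index of highest Q
```

Separating value from advantage helps when many actions have similar outcomes in a given state, which matches the abstract's point that diverse user behaviors make plain DQN hard to train.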
Keywords/Search Tags: Edge Computing, Task Offloading, Partial Observation System, Decentralized Learning, Deep Reinforcement Learning