
Research On Computation Offloading In Cloud-End Fusion

Posted on: 2020-05-11    Degree: Doctor    Type: Dissertation
Country: China    Candidate: L Lin    Full Text: PDF
GTID: 1368330599461826    Subject: Computer software and theory
Abstract/Summary:
Cloud-end fusion computing is a product of the big data era and has become a mainstream computing paradigm that combines multiple computing forms. Its development has gone through two phases, forming two different architectures: the cloud/client architecture, from the convergence of mobile and cloud computing, and the cloud-edge-client architecture, which integrates end devices, edge nodes, and data centers with the emergence of edge computing. In cloud-end fusion, computation offloading is an important computing form in which end devices migrate part of their computation to more powerful facilities (edge nodes or cloud servers) to achieve computing speedup and energy saving. Computation offloading is a complex technique involving task partitioning, runtime decision-making, and collaborative task scheduling between edge nodes and the cloud. Moreover, computation offloading faces different challenges under different cloud-end fusion architectures, mainly in the following aspects.

In the cloud/client fusion, existing computation offloading frameworks provide general methods for task partitioning and execution, but the offloading mechanism for a specific application (such as cloud gaming) still has to be designed. Such a mechanism is tightly coupled with the characteristics of the application: the task partitioning must follow the application's execution flow, an appropriate offloading granularity must be selected, and the data transmission between the cloud and end devices must be optimized to reduce latency. Research on this topic is still lacking.

In the cloud-edge-client fusion, the primary tasks of computation offloading are to build a framework that fits the three-tier architecture of cloud, edge, and end devices, and to make the offloading decision, that is, to decide whether a task should be offloaded and where. Existing computation offloading frameworks often rely on end devices to decide whether a task should be offloaded according to the runtime supply and demand of resources. This leads to "greedy" demand and disorderly competition for resources, and it is unsuitable for platforms with relatively limited resources such as edge nodes, where it causes performance degradation under multitasking. In addition, because the deployment of edge nodes is self-organizing, loosely coupled, and widely distributed, load balancing in such a decentralized architecture is challenging. Furthermore, there are many types of offloaded applications with different latency characteristics (latency-sensitive or latency-tolerant), and designing a task scheduling strategy that adapts to these characteristics is a new challenge in the cloud-edge-client fusion.

To overcome the above research challenges, and following the architectural evolution of cloud-end fusion, we focus on task partitioning for specific applications, dynamic offloading decision-making, and distributed task scheduling.

In the cloud/client fusion, for the offloading mechanism of a specific application, namely the typical case of cloud gaming, LiveRender is proposed based on fine-grained computation offloading. LiveRender re-divides the execution flow of the game, separates game code execution from screen rendering, and migrates code execution to the cloud, building a cloud gaming system based on a "graphics stream". To optimize data transfer between the cloud and the client and reduce response latency, LiveRender implements three data compression mechanisms: graphics intraframe compression, graphics interframe compression, and graphics data caching. Experimental results show that LiveRender saves 58% of the bandwidth and reduces response delay by 77% compared with traditional "video streaming" cloud gaming.
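The abstract does not describe the internals of these compression mechanisms, so the following Python sketch only illustrates the caching idea behind the "graphics stream": a payload (for example a vertex buffer or texture) that has already been sent to the client is replaced by a short reference. The class name, the hash-based cache key, and the message layout are illustrative assumptions, not LiveRender's actual interface; interframe compression would additionally send only parameter deltas between consecutive frames.

    import hashlib

    class GraphicsStreamEncoder:
        """Illustrative cloud-side encoder: repeated graphics payloads
        (vertex buffers, textures, ...) are replaced by short cache references."""

        def __init__(self):
            self.cache = {}      # payload digest -> small reference id
            self.next_id = 0

        def encode_frame(self, commands):
            """commands: list of (opcode, payload_bytes) issued for one frame."""
            encoded = []
            for opcode, payload in commands:
                key = hashlib.sha1(payload).hexdigest()
                if key in self.cache:
                    # payload already cached on the client: send a reference only
                    encoded.append((opcode, "ref", self.cache[key]))
                else:
                    ref = self.cache[key] = self.next_id
                    self.next_id += 1
                    # first occurrence: send the full payload together with its id
                    encoded.append((opcode, "data", ref, payload))
            return encoded

    # Example: the second frame reuses the mesh sent in the first frame
    encoder = GraphicsStreamEncoder()
    frame1 = encoder.encode_frame([("draw_mesh", b"<vertex data>")])
    frame2 = encoder.encode_frame([("draw_mesh", b"<vertex data>")])  # sent as a "ref"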
In the cloud-edge-client fusion, for the offloading framework and decision-making mechanism, a framework called Echo is proposed that is aware of the three-tier architecture consisting of end devices, edge nodes, and the cloud. Echo lets edge nodes make the offloading decisions. Specifically, an edge node predicts the completion time of a task based on a preemptive scheduling algorithm that provides QoS (Quality of Service) guarantees, and then places the task on the platform (edge node, cloud, or end device) with the shortest estimated completion time (a sketch of this placement rule is given below). Additionally, Echo implements supporting mechanisms for computation offloading, including a programming model, edge node discovery, automatic analysis of application code, and data transfer optimization based on object compression. Experimental results show that Echo significantly reduces the average latency and the energy consumption of end devices compared with traditional computation offloading frameworks.

In the cloud-edge-client fusion, for multi-task, multi-node task scheduling in computation offloading, a distributed and latency-aware task scheduling mechanism called Petrel is proposed. Petrel first implements a load balancing strategy based on "random sampling", which improves system performance and reduces scheduling overhead. It then applies an adaptive latency-aware scheduling algorithm to cope with differences in application latency characteristics: a "greedy" strategy for latency-sensitive applications and a "best-effort" strategy for latency-tolerant applications (sketched below). Experimental results show that Petrel achieves a significant performance improvement over existing scheduling strategies in terms of average speedup, maximum task completion time, and task completion ratio.

In summary, to achieve low-latency computation offloading in cloud-end fusion, this dissertation proposes several novel mechanisms: task partitioning for a specific application with latency optimization, a three-tier framework with an offloading decision-making mechanism, and distributed, latency-aware task scheduling. These mechanisms overcome the deficiencies of existing work from different aspects.
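As a concrete reading of Echo's decision rule described above, the following Python sketch picks the platform with the shortest estimated completion time. The Task and Platform fields and the completion-time formula (transfer time plus queued work divided by service rate) are simplifying assumptions made for illustration; in Echo the prediction comes from its QoS-aware preemptive scheduler rather than from such a closed-form estimate.

    from dataclasses import dataclass

    @dataclass
    class Task:
        cycles: float        # estimated computation demand of the task
        input_size: float    # bytes that must be transferred before execution

    @dataclass
    class Platform:
        name: str            # "device", "edge" or "cloud"
        bandwidth: float     # bytes/s on the link from the device to this platform
        service_rate: float  # cycles/s the platform can devote to the task
        queued_cycles: float # work already queued ahead of this task

        def estimated_completion(self, task):
            # stand-in for Echo's prediction, which is based on a preemptive,
            # QoS-aware scheduling algorithm rather than this simple formula
            transfer = task.input_size / self.bandwidth if self.bandwidth else 0.0
            return transfer + (self.queued_cycles + task.cycles) / self.service_rate

    def place(task, platforms):
        """Echo-style decision: run the task where its estimated completion
        time is shortest."""
        return min(platforms, key=lambda p: p.estimated_completion(task))

    # Example: a local device, an edge node and a distant cloud compete for one task
    task = Task(cycles=2e9, input_size=5e6)
    candidates = [
        Platform("device", bandwidth=0.0, service_rate=1e9,  queued_cycles=0.0),
        Platform("edge",   bandwidth=5e6, service_rate=8e9,  queued_cycles=4e9),
        Platform("cloud",  bandwidth=1e6, service_rate=3e10, queued_cycles=0.0),
    ]
    print(place(task, candidates).name)   # -> "edge" with these made-up numbers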
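The abstract characterizes Petrel's load balancing only as "random sampling" and its scheduling as greedy versus best-effort, so the sketch below is one plausible interpretation under stated assumptions: a latency-sensitive task is dispatched immediately to the least loaded of d randomly probed nodes (d = 2 is an assumed sample size), while a latency-tolerant task is parked in a backlog and served when a node becomes idle. The Node class, the Task descriptor, and the latency_sensitive flag are hypothetical stand-ins.

    import random
    from collections import deque, namedtuple

    Task = namedtuple("Task", "name latency_sensitive")   # hypothetical task descriptor

    class Node:
        """Minimal stand-in for an edge node's task queue."""
        def __init__(self, name):
            self.name, self.queue = name, deque()
        @property
        def queue_length(self):
            return len(self.queue)
        def enqueue(self, task):
            self.queue.append(task)

    def sample_least_loaded(nodes, d=2):
        """'Random sampling' load balancing: probe d randomly chosen nodes
        and pick the least loaded one."""
        return min(random.sample(nodes, min(d, len(nodes))),
                   key=lambda n: n.queue_length)

    def dispatch(task, nodes, backlog):
        """Latency-aware dispatch: 'greedy' for latency-sensitive tasks,
        'best-effort' (deferred) for latency-tolerant ones."""
        if task.latency_sensitive:
            sample_least_loaded(nodes).enqueue(task)       # place immediately
        else:
            backlog.append(task)                           # wait for spare capacity
            for node in (n for n in nodes if n.queue_length == 0):
                if not backlog:
                    break
                node.enqueue(backlog.popleft())

    # Example: one latency-sensitive and one latency-tolerant task on three edge nodes
    nodes, backlog = [Node(f"edge-{i}") for i in range(3)], deque()
    dispatch(Task("video-frame", True), nodes, backlog)
    dispatch(Task("batch-analytics", False), nodes, backlog)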
Keywords/Search Tags:Cloud-end Fusion, Computation Offloading, Cloud Gaming, Offloading Decision, Task Scheduling, Load Balancing