
Research On Edge-cloud Collaborative Computing Framework And Offloading Strategy For Deep Learning Applications

Posted on: 2022-10-25
Degree: Master
Type: Thesis
Country: China
Candidate: H Gao
Full Text: PDF
GTID: 2518306542963809
Subject: Software engineering
Abstract/Summary:
With the development of deep learning technology and the popularization of end devices, deep learning applications are widely run on end devices. These applications have powerful data-analysis capabilities that can process the massive amounts of data generated by end devices and extract useful information, making end devices intelligent. As resource-intensive tasks, deep learning applications are currently deployed and executed in two main ways: on cloud servers or on end devices. With cloud-based deployment, the massive data generated by end devices must be sent to the cloud server, which introduces high transmission delays and makes real-time requirements difficult to meet. With device-based deployment, running complex deep learning models entirely on end devices with limited computing power and battery capacity also results in high latency and cannot satisfy users' needs for low energy consumption and long battery life. Because of the limitations of network transmission and end-device performance, neither computing mode can guarantee the real-time and low-energy-consumption requirements of deep learning applications on end devices. Therefore, how to reduce the response time of deep learning applications and the energy consumption of end devices, and thereby improve users' Quality of Service (QoS), is a problem worth studying.

In recent years, edge computing has emerged as a new computing paradigm that adds edge servers between cloud servers and end devices, so that computing tasks can optionally be offloaded to edge servers for execution. Because edge servers are closer to users, edge computing provides faster response and better interactivity, which reduces latency and improves network stability. This thesis therefore introduces edge computing to provide computing and storage capabilities at the edge of the network, forming an "end-edge-cloud" execution architecture with multiple computing resources, and studies how to use edge computing to meet users' QoS requirements in deep learning applications. The main work of this thesis is as follows:

1. For deep learning applications, how to reduce application response time to optimize QoS. First, this thesis introduces edge computing as a new computing mode. In an edge computing environment, effective collaboration among end devices, edge servers, and cloud servers is essential. This thesis then considers the different characteristics of the "end-edge-cloud" computing resources and proposes a Collaborative Framework of Edge Computing for Deep Learning Applications (Edge4DL). Finally, this thesis builds a real edge computing environment and, as a case study, uses a face-recognition deep learning application to locate and confirm the recipient in a UAV-delivery scenario; experiments show that the collaborative framework Edge4DL significantly reduces response time and network traffic.

2. For deep learning applications, how to reduce the energy consumption of end devices while meeting time constraints to optimize QoS. Building on the "end-edge-cloud" computing resources of the Edge4DL framework, this thesis further exploits the multi-layer structure and feature-extraction characteristics of Deep Neural Networks (DNNs) and proposes an energy-efficient task offloading strategy for DNN computing tasks in an edge computing environment. The strategy uses each DNN layer as the basic unit for partitioning the computing tasks of a single deep learning application, and considers all "end-edge-cloud" computing resources together when offloading. First, this thesis establishes a time and energy-consumption evaluation model for DNN task offloading in an edge computing environment, and, based on this model, designs a fitness function that evaluates end-device energy consumption under a response-time constraint. Then, following this strategy, this thesis proposes Multi-Resource Task Offloading Based on Particle Swarm Optimization (MRPSO) for the edge computing environment. Finally, experiments show that, compared with existing task offloading strategies, the MRPSO scheduling algorithm achieves the lowest fitness value; that is, under the response-time constraint, it yields the lowest end-device energy consumption for deep learning applications.

In summary, this thesis introduces edge computing and studies how to coordinate the "end-edge-cloud" computing resources to meet users' QoS requirements in deep learning applications. It proposes the collaborative framework Edge4DL, builds a real edge computing environment, and verifies with a practical case that the framework effectively reduces response time and network traffic. On this basis, it further exploits the multi-layer structure and feature-extraction characteristics of DNNs to partition and offload them, and proposes MRPSO for the edge computing environment. Finally, experiments show that MRPSO substantially reduces end-device energy consumption under users' response-time constraints.
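The abstract does not give the concrete form of the evaluation model or fitness function, so the following Python sketch is purely illustrative: every number (per-layer compute times, link bandwidths, power figures, the time budget) is a made-up placeholder, not a value from the thesis. It shows the general shape of the objective MRPSO's swarm would minimize: score a layer-to-resource assignment by end-device energy, adding a penalty when the response-time constraint is violated.

```python
from itertools import product

# All numbers below are illustrative placeholders, not values from the thesis.
# Resources: 0 = end device, 1 = edge server, 2 = cloud server.
COMPUTE = [                       # COMPUTE[i][r]: exec time (s) of layer i on r
    (0.30, 0.10, 0.05),
    (0.50, 0.15, 0.06),
    (0.40, 0.12, 0.05),
    (0.20, 0.08, 0.04),
]
INPUT_MB = 1.0                    # raw input produced on the end device (MB)
OUTPUT_MB = [1.2, 0.8, 0.4, 0.1]  # data emitted by each layer (MB)
BANDWIDTH = {(0, 1): 50.0, (1, 2): 20.0, (0, 2): 10.0}  # MB/s, symmetric
DEVICE_POWER = 2.0                # W while the end device computes
TX_POWER = 1.0                    # W while the device radio is active
T_MAX = 1.0                       # response-time constraint (s)
PENALTY = 1e3                     # added when the constraint is violated

def link_time(a, b, mb):
    """Transfer time between resources a and b (0 if co-located)."""
    if a == b:
        return 0.0
    return mb / BANDWIDTH[(min(a, b), max(a, b))]

def fitness(assignment):
    """End-device energy of a layer-to-resource assignment, penalized
    when total response time exceeds T_MAX (lower is better)."""
    time = energy = 0.0
    prev = 0                                   # input starts on the device
    for i, r in enumerate(assignment):
        mb = OUTPUT_MB[i - 1] if i else INPUT_MB
        t_tx = link_time(prev, r, mb)
        time += t_tx + COMPUTE[i][r]
        if prev == 0 or r == 0:                # the device radio is involved
            energy += TX_POWER * t_tx
        if r == 0:                             # the device does the computing
            energy += DEVICE_POWER * COMPUTE[i][r]
        prev = r
    t_back = link_time(prev, 0, OUTPUT_MB[-1])  # result back to the device
    time += t_back
    energy += TX_POWER * t_back
    return energy + (PENALTY if time > T_MAX else 0.0)

# On this 4-layer toy instance, exhaustive search over the 3^4 assignments
# stands in for the particle swarm.
best = min(product(range(3), repeat=len(COMPUTE)), key=fitness)
print(best, round(fitness(best), 4))
```

On such a small instance brute force suffices; a particle swarm like MRPSO becomes necessary once the number of layers makes exhaustive search over resource assignments infeasible.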
Keywords/Search Tags: Deep Learning, Edge Computing, Quality of Service, Task Scheduling, Resource Management