
Joint Machine Learning Model Deployment And Resource Management In Multi-access Edge Computing

Posted on: 2023-11-26    Degree: Master    Type: Thesis
Country: China    Candidate: L Gao    Full Text: PDF
GTID: 2568306914979419    Subject: Electronic Science and Technology
Abstract/Summary:
With the rapid development of artificial intelligence, intelligent applications have become widespread and increasingly popular. Deep learning, a key enabling technology of artificial intelligence and the mainstream direction of machine learning, produces tasks that are both compute-intensive and delay-sensitive. Compared with traditional cloud computing, multi-access edge computing (MEC) sinks computing resources from the network core to the network edge and complements the cloud computing model, allowing tasks to be computed close to where they are generated; its advantages include relieving core-network bandwidth pressure, faster service response, and high reliability. Resource management is the core problem in MEC. Exploiting the separability of deep learning models for distributed deployment, and jointly optimizing multiple resources on that basis, can improve resource utilization efficiency. Existing work on DNN partitioning and deployment and on resource management in MEC environments does not fully consider edge-cloud cooperation across multiple base stations, the joint management of task offloading with DNN partitioning and deployment, or the joint optimization of multiple resources. This thesis therefore studies machine learning model deployment and resource management in the MEC environment. The main contents are as follows:

(1) DNN partitioning and deployment with joint resource management under the edge-cloud architecture. Considering a scenario in which edge-cloud collaboration under multiple base stations provides multiple types of deep learning computing services to multiple devices, the goal is to minimize the task processing delay per unit time by jointly optimizing the DNN partitioning and deployment strategy, the wireless channel allocation strategy, and the computing resource allocation strategy. The system model establishes task queues and introduces queueing delay. To solve this deeply coupled, multivariable mixed-integer nonlinear program efficiently, a heuristic algorithm is proposed: it first solves the channel allocation problem with a graph-based device clustering algorithm, and then derives the DNN partitioning and deployment strategy and the computing resource allocation strategy while guaranteeing the stability of the queueing system. The complexity of each part of the algorithm is analyzed, and comparisons against three baseline schemes in five scenarios verify the superior performance of the proposed scheme.
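As an illustrative aside, graph-based clustering for channel allocation can be pictured as coloring a device conflict graph so that mutually interfering devices never share a channel, with devices of the same color forming one cluster. The Python sketch below is a hypothetical stand-in assuming such a conflict-graph input and a Welsh-Powell-style greedy coloring; the names cluster_devices, conflicts, and num_channels are illustrative, and the thesis's actual clustering criterion is not described in this abstract.

    # Illustrative sketch (not the thesis's algorithm): greedy conflict-graph
    # coloring as a stand-in for graph-based device clustering, where devices
    # joined by an edge interfere and must not share a wireless channel.
    from typing import Dict, List, Set, Tuple

    def cluster_devices(num_devices: int,
                        conflicts: List[Tuple[int, int]],
                        num_channels: int) -> Dict[int, int]:
        """Assign each device a channel so that conflicting devices differ.

        Devices that receive the same channel index form one cluster.
        Returns a mapping device -> channel; raises if channels run out.
        """
        neighbors: Dict[int, Set[int]] = {d: set() for d in range(num_devices)}
        for u, v in conflicts:
            neighbors[u].add(v)
            neighbors[v].add(u)

        # Greedy coloring in descending-degree order (Welsh-Powell style).
        order = sorted(range(num_devices), key=lambda d: -len(neighbors[d]))
        channel_of: Dict[int, int] = {}
        for d in order:
            used = {channel_of[n] for n in neighbors[d] if n in channel_of}
            free = next((c for c in range(num_channels) if c not in used), None)
            if free is None:
                raise ValueError(f"device {d}: no interference-free channel left")
            channel_of[d] = free
        return channel_of

    if __name__ == "__main__":
        # Toy example: 5 devices, conflicts among devices in the same cell.
        print(cluster_devices(5, [(0, 1), (1, 2), (3, 4)], num_channels=3))

In this toy formulation the number of required channels is bounded by the maximum conflict degree plus one, which is why a greedy ordering by degree is a common heuristic choice.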
(2) Deep learning task offloading, DNN partitioning and deployment, and resource management under the end-edge-cloud architecture. A multi-type deep learning task computing architecture composed of devices, edge, and cloud under multiple base stations is considered, in which tasks can be computed locally on the devices or offloaded to the computing services provided by the edge and the cloud. The goal is to minimize task processing delay by jointly optimizing the task offloading strategy, the DNN partitioning and deployment strategy, and the computing resource allocation strategy. Reflecting how resources are scheduled in practice, the original problem is decomposed into a DNN partitioning, deployment, and computing resource allocation problem, treated as the system's long-term optimization, and a task offloading problem at each decision instant. An iterative heuristic algorithm is proposed to solve the former, while the latter is modeled and solved with game theory. Simulation comparisons in four scenarios verify the effectiveness of the DNN partitioning, deployment, and computing resource allocation scheme. In addition, the game model converges quickly to a Nash equilibrium, and simulation comparisons in two scenarios verify the superior performance of the proposed task offloading strategy.
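For intuition only, game-theoretic offloading of this kind is often solved by best-response dynamics: each device repeatedly switches to the computation site (local, edge, or cloud) that minimizes its own delay given the others' choices, and the process stops when no device benefits from deviating, i.e., at a Nash equilibrium. The sketch below uses a toy congestion-style delay function; the function delay, the three-action set, and all cost constants are assumptions for illustration, not the delay model of the thesis.

    # Illustrative sketch (assumed model, not the thesis's): best-response
    # dynamics for a task-offloading game. Actions: 0 = local, 1 = edge,
    # 2 = cloud. Shared edge/cloud resources slow down as more devices
    # offload to them, so the game resembles a congestion game.
    from typing import List

    ACTIONS = (0, 1, 2)  # 0: local, 1: offload to edge, 2: offload to cloud

    def delay(device: int, action: int, profile: List[int]) -> float:
        """Placeholder delay: fixed local cost per device; offloading cost
        grows with the number of devices sharing the same site."""
        if action == 0:
            return 5.0 + 0.5 * device
        load = sum(1 for d, a in enumerate(profile)
                   if a == action and d != device) + 1
        base = 1.0 if action == 1 else 2.5   # edge is closer than cloud
        return base * load + 0.8             # 0.8 models transmission delay

    def best_response_offloading(num_devices: int,
                                 max_rounds: int = 100) -> List[int]:
        profile = [0] * num_devices          # start with all-local computation
        for _ in range(max_rounds):
            changed = False
            for d in range(num_devices):
                best = min(ACTIONS, key=lambda a: delay(d, a, profile))
                if best != profile[d]:
                    profile[d] = best
                    changed = True
            if not changed:                  # no profitable deviation: NE
                break
        return profile

    if __name__ == "__main__":
        print(best_response_offloading(num_devices=6))

Because each unilateral switch strictly lowers the switching device's delay in this congestion-style setting, the iteration terminates at a pure-strategy equilibrium, mirroring the fast convergence to Nash equilibrium reported for the thesis's game model.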
Keywords/Search Tags: multi-access edge computing, DNN partitioning and deployment, resource management