
Research On Key Technologies Of Mobile Edge Computing Serverless Architecture

Posted on: 2022-02-06
Degree: Master
Type: Thesis
Country: China
Candidate: D Shi
Full Text: PDF
GTID: 2518306572451104
Subject: Cyberspace security

Abstract/Summary:
Edge computing is a new research direction that reduces data transfer and latency and improves quality of service by moving the computational load closer to the user at the network edge. Serverless computing is a new development and operations model in which developers focus only on application logic and need not perform server operation and maintenance. In this thesis, we propose a serverless computing framework for edge computing: we analyze the problems of current serverless computing systems and the improvements needed when they are combined with edge computing, propose a framework for single-node use, then analyze the problems of the multi-node case and make the corresponding multi-node improvements.

We first analyze the requirements of serverless computing under edge computing. Because edge computing is meant to serve devices in the nearest region, and a large part of its motivation is to reduce computation latency, latency is an important performance indicator of the platform. At the same time, edge computing must support migration: user devices tend to be mobile, and users often need to move from one region to another, so edge computing systems need the ability to assist migration.

We first propose a single-node serverless computing system suitable for edge computing, which speeds up container startup through a more lightweight container runtime. User data is managed through a layered system and mounts, and remote mounting keeps migration from being slowed by user data loading. A model was also designed to predict service starts and reduce the number of cold starts. The final cold start time was reduced to under 50 ms, and the number of cold starts was reduced by 44.6% compared with a fixed-interval warm-up strategy, effectively optimizing system response latency.

In the multi-node case, we analyze the resource waste caused by random allocation of services across nodes and propose an optimization algorithm. Compared with random allocation, our algorithm saves about 15% of disk usage and 11% of memory usage. We also propose a solution for communication and data management of user task containers across multiple nodes. Finally, we designed an example that detects whether a driver is driving safely in a V2X environment to test the system. The system responds to each collected data point within 17.4 ms; after a migration occurs the latency increases, but it can still be kept within 30 ms.
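The abstract does not describe the structure of the start-prediction model, so the following is only an illustrative sketch of the idea: a predictor (hypothetical class `WarmupPredictor`) estimates the next invocation from recent inter-arrival times and pre-warms the container shortly before it, instead of keeping it warm for a fixed interval after each call.

```python
from collections import deque


class WarmupPredictor:
    """Illustrative invocation-interval predictor for container pre-warming.

    The thesis proposes a model that predicts service starts to cut cold
    starts; its actual design is not given in the abstract, so this
    moving-average heuristic is purely a sketch of the intuition.
    """

    def __init__(self, history_size=8, lead_time=0.5):
        self.arrivals = deque(maxlen=history_size)  # recent invocation timestamps (s)
        self.lead_time = lead_time                  # warm up this long before the predicted call

    def record(self, t):
        self.arrivals.append(t)

    def predicted_next(self):
        """Predict the next invocation time from the mean inter-arrival gap."""
        if len(self.arrivals) < 2:
            return None
        ts = list(self.arrivals)
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        return ts[-1] + sum(gaps) / len(gaps)

    def should_prewarm(self, now):
        """True once `now` is within `lead_time` of the predicted invocation."""
        nxt = self.predicted_next()
        return nxt is not None and now >= nxt - self.lead_time


p = WarmupPredictor()
for t in (0.0, 10.0, 20.0):   # a function invoked every ~10 s
    p.record(t)
print(p.predicted_next())      # → 30.0
print(p.should_prewarm(29.6))  # → True
print(p.should_prewarm(25.0))  # → False
```

Compared with a fixed-interval keep-alive, a predictor of this kind lets an idle container be reclaimed early while still being warm when the next call actually arrives, which is the trade-off behind the reported 44.6% reduction in cold starts.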
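The abstract likewise does not specify the multi-node allocation algorithm, only that it beats random placement on disk and memory usage. One plausible source of those savings, sketched below with hypothetical helpers `place_greedy` and `disk_copies`, is co-locating tasks of the same service so each node stores the service image at most once rather than scattering duplicate copies across the cluster.

```python
def place_greedy(tasks, nodes):
    """Greedy placement sketch: put a task on a node that already holds the
    image of its service, so image layers (disk) and shareable runtime
    state (memory) are not duplicated. The thesis algorithm itself is not
    given in the abstract; this only illustrates the intuition."""
    placement = {}
    images = {n: set() for n in nodes}  # service images cached on each node
    for task, service in tasks:
        # Prefer a node that already caches this service's image;
        # otherwise spread load onto the node holding the fewest images.
        candidates = [n for n in nodes if service in images[n]]
        node = candidates[0] if candidates else min(nodes, key=lambda n: len(images[n]))
        images[node].add(service)
        placement[task] = node
    return placement, images


def disk_copies(images):
    """Total image copies stored cluster-wide (a proxy for disk usage)."""
    return sum(len(s) for s in images.values())


tasks = [(f"t{i}", f"svc{i % 3}") for i in range(9)]  # 9 tasks over 3 services
nodes = ["n1", "n2", "n3"]
placement, images = place_greedy(tasks, nodes)
print(disk_copies(images))  # → 3: one image per service cluster-wide
```

Random placement would instead tend to pull every service's image onto every node over time, which matches the kind of disk and memory waste the thesis measures against.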
Keywords/Search Tags: edge computing, serverless computing, task migration, vehicle-to-everything, container