Adaptive Container Pool-Based Cold Start Optimization For Serverless Computing

Posted on: 2022-03-03
Degree: Master
Type: Thesis
Country: China
Candidate: Z K Wang
Full Text: PDF
GTID: 2518306572990969
Subject: Computer system architecture
Abstract/Summary:
Serverless computing platforms provide services at function granularity: users simply write business logic code and pay only for the resources their functions consume. The platform allocates resources for each function instance, and the function suffers a cold start before execution. Cold starts severely degrade the user experience by inflating the end-to-end latency of workloads. The delay comes from provisioning the runtime in which the function executes, which consists of three main stages: pulling the code, creating the container, and preparing the runtime environment. Moreover, once a function finishes executing, the platform usually reclaims the resources allocated to its instance, so the next invocation of the function experiences a cold start again.

This thesis presents Ace, an adaptive container pool-based cold start optimization scheme for serverless computing. Ace consists of three parts: a priority queue-based container pool, a dynamic adjustment mechanism for the adaptive container pool, and a shared cache and parallel pull mechanism for files.

The priority queue-based container pool reduces the frequency of cold starts or shortens their delay. By prewarming and reusing containers, it avoids container creation as much as possible. Containers are prioritized by the cumulative hit rate of the dependencies imported into them. When a request arrives, a local-maximum lookup policy finds and allocates a container that already holds as many of the function's dependencies as possible, as sketched below.

The dynamic adjustment mechanism scales the container pool up and down to match the request rate while keeping the memory footprint as small as possible, by periodically monitoring the perceived requests and the system's memory usage (see the second sketch below).

The shared cache and parallel pull mechanism further shortens the initialization of the function's runtime environment. Code files and function dependencies are cached locally, so files can be reused and shared by multiple containers through bind mounts. For nested dependencies, a parallel breadth-first pre-analysis algorithm together with a parallel download-and-installation strategy speeds up fetching and installing dependencies (both sketched below).

Ace therefore effectively mitigates cold start delay and reduces the end-to-end latency of workloads. Performance results show that Ace reduces the average end-to-end latency of requests by up to 69.79% compared with OpenLambda, an open-source serverless computing platform. Compared with a fixed-size container pool, Ace copes effectively with workload changes, reducing the average end-to-end latency of requests by up to 24.1% with a smaller memory footprint.
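To make the priority queue-based pool and its local-maximum lookup concrete, here is a minimal Python sketch. The names (`Container`, `ContainerPool`, `acquire`) are illustrative assumptions, not APIs from the thesis or from OpenLambda, and priority is modeled simply as a cumulative hit count over cached dependencies.

```python
class Container:
    """Model of a warm container caching a set of imported dependencies (hypothetical)."""
    def __init__(self, cid, deps):
        self.cid = cid
        self.deps = set(deps)  # dependencies already imported in this container
        self.hits = 0          # cumulative hit count of its cached dependencies


class ContainerPool:
    """Warm pool: lookup greedily picks the container sharing the most
    dependencies with the incoming request (a local-maximum policy);
    eviction drops the container with the lowest cumulative hit rate."""

    def __init__(self):
        self.containers = {}

    def add(self, container):
        self.containers[container.cid] = container

    def acquire(self, needed_deps):
        """Return the warm container covering the most requested
        dependencies, or None to signal a full cold start."""
        needed = set(needed_deps)
        best = max(self.containers.values(),
                   key=lambda c: len(c.deps & needed),
                   default=None)
        if best is None or not (best.deps & needed):
            return None                       # no useful warm container
        best.hits += len(best.deps & needed)  # update cumulative hit rate
        del self.containers[best.cid]         # container is now in use
        return best

    def evict_lowest(self):
        """Under memory pressure, drop the lowest-priority container."""
        if self.containers:
            victim = min(self.containers.values(), key=lambda c: c.hits)
            del self.containers[victim.cid]


pool = ContainerPool()
pool.add(Container("c1", ["numpy", "requests"]))
pool.add(Container("c2", ["flask"]))
print(pool.acquire(["numpy", "pandas"]).cid)  # "c1": shares "numpy"
```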
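One possible shape for the dynamic adjustment loop, reusing the `Container`/`ContainerPool` classes from the previous sketch. The watermarks, the containers-per-request ratio, and the `prewarm_container` helper are all hypothetical defaults; the thesis only states that the pool is resized by periodically monitoring perceived requests and system memory usage.

```python
import itertools
import time

_ids = itertools.count()

def prewarm_container():
    """Stand-in for actually starting an empty warm container (hypothetical)."""
    return Container(f"warm-{next(_ids)}", deps=[])

def adjust_pool_once(pool, request_rate, mem_usage,
                     mem_high=0.8, containers_per_rps=2):
    """One monitoring tick: shrink under memory pressure, otherwise grow
    the pool toward the observed request rate. Thresholds are illustrative."""
    if mem_usage > mem_high:
        # shed roughly a quarter of the lowest-priority containers
        for _ in range(max(1, len(pool.containers) // 4)):
            pool.evict_lowest()
        return
    target = int(request_rate * containers_per_rps)
    while len(pool.containers) < target:
        pool.add(prewarm_container())

def autoscale(pool, get_request_rate, get_mem_usage, interval=5.0):
    """Periodic monitoring loop driving the adjustments."""
    while True:
        adjust_pool_once(pool, get_request_rate(), get_mem_usage())
        time.sleep(interval)
```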
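File sharing via bind mounts could look like the following, where locally cached code and per-package dependency directories are mounted read-only into a container so that many containers share a single on-disk copy. The cache path, image name, and mount targets are assumptions for illustration; only the standard `docker run -v host:container:ro` syntax is real.

```python
import os
import subprocess

CACHE_ROOT = "/var/cache/ace"  # hypothetical host-side shared cache

def start_shared_container(image, func_name, dep_names):
    """Launch a container whose code and cached dependencies are
    bind-mounted read-only from the shared host cache."""
    cmd = ["docker", "run", "-d",
           # function code directory, shared read-only across containers
           "-v", f"{os.path.join(CACHE_ROOT, 'code', func_name)}:/app:ro"]
    for dep in dep_names:
        host_dir = os.path.join(CACHE_ROOT, "pkgs", dep)
        target = f"/usr/local/lib/python3.10/site-packages/{dep}"
        cmd += ["-v", f"{host_dir}:{target}:ro"]  # one cached copy, many readers
    cmd.append(image)
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout.strip()  # container id
```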
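For nested dependencies, the parallel breadth-first pre-analysis and the parallel download could be sketched as follows. Here `get_direct_deps` and `fetch` stand in for querying a package index and downloading an archive; the thesis does not specify these interfaces.

```python
from concurrent.futures import ThreadPoolExecutor

def resolve_dependencies(roots, get_direct_deps, workers=8):
    """Expand the nested dependency graph level by level (breadth-first),
    fetching each level's metadata in parallel."""
    seen = set(roots)
    frontier = list(roots)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        while frontier:
            nxt = []
            for deps in ex.map(get_direct_deps, frontier):
                for d in deps:
                    if d not in seen:
                        seen.add(d)
                        nxt.append(d)
            frontier = nxt
    return seen

def download_all(packages, fetch, workers=8):
    """Download (and install) every resolved package in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        list(ex.map(fetch, packages))

# toy dependency graph standing in for real package metadata
graph = {"a": ["b", "c"], "b": ["c", "d"], "c": [], "d": []}
print(sorted(resolve_dependencies(["a"], lambda p: graph.get(p, []))))
# ['a', 'b', 'c', 'd']
```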
Keywords/Search Tags: Cloud computing, Serverless computing, Cold start, Container, Adaptive container pool