
Proactive Load Optimizer Model In Container-based Cloud

Posted on: 2019-04-15
Degree: Master
Type: Thesis
Country: China
Candidate: Y Wu
Full Text: PDF
GTID: 2428330590467483
Subject: Software engineering

Abstract/Summary:
With the development of cloud computing over the last ten years, more and more software producers and service providers have been migrating their services from traditional clusters to clouds for the lower costs and other benefits brought by virtualization. Containerization is becoming increasingly popular, and Docker is one of the most widely used container platforms for deploying and managing applications. Docker introduces the Docker image, which can be considered the foundation of container-based clouds. Meanwhile, as the Internet has spread into everyday use, load varies more dynamically and more intensively. Although cloud users can use tools provided by cloud providers to scale services automatically by setting thresholds, such adjustment always lags behind load changes and depends on subjective estimation. Moreover, in the container cluster frameworks used as the infrastructure of container-based clouds, a service's replica setting is simply its number of instances; the lack of a suitable scheduling policy and management mechanism in these frameworks makes it difficult for services to cope with dynamic load changes. Fast startup and spread of services gives them the ability to react rapidly to bursts of load.

In this paper, we introduce a proactive load optimizer model for container-based clouds that accelerates service creation and spread so that services can cope with dynamic load changes. The model makes scaling decisions in advance, based on load predictions derived from historical data. Moreover, the model leverages cached Docker image filesystem layers to speed up the creation and spread of services, while still meeting their availability requirements.

In the model, we monitor resource usage, such as memory, CPU time, and network bandwidth, and take these measurements as the real-time load. We then use prediction models to forecast resource usage in the coming periods from the historical monitoring data. A provision module computes the required number of instances from the resource usage predictions. Since predictions carry no accuracy guarantee for the actual load, an optimization module adjusts the scale whenever the real-time load moves outside configured thresholds. Finally, a management module makes the final scaling decision for each service according to its replica requirement, and a scheduling module scales the service to the confirmed size.

For scheduling, we introduce a heuristic algorithm that leverages the layered filesystem of Docker images and their content addressability to make placement decisions for services, taking replica distribution, reusable cached layers, and resource allocation into account. The algorithm tries to minimize the time spent downloading and installing dependency packages and environments, thereby speeding up service creation and scaling.

Finally, we design experiments to validate the model. The accuracy of its load predictions is acceptable. Compared with Docker Swarm, the results show that creating a service and scaling out a service are much faster with the model, while the time to scale in a service is almost the same in both. We also introduce replica as a new constraint so that services never violate their replica requirements. All of the results show that the model performs better for services in container-based clouds under dynamically varying loads.
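The prediction and provision steps can be illustrated with a minimal sketch. The abstract does not name the specific prediction model used, so exponential smoothing stands in here; the function names, the per-instance capacity parameter, and the sample values are illustrative assumptions, not the thesis's actual implementation:

```python
import math

def predict_next(history, alpha=0.5):
    # Exponentially weighted moving average over past usage samples.
    # (A stand-in for the thesis's prediction model, which the
    # abstract does not specify.)
    forecast = history[0]
    for sample in history[1:]:
        forecast = alpha * sample + (1 - alpha) * forecast
    return forecast

def provision(history, capacity_per_instance, min_replicas):
    # Translate the usage forecast into an instance count, never
    # dropping below the service's replica requirement (the
    # availability constraint the model introduces).
    predicted = predict_next(history)
    needed = math.ceil(predicted / capacity_per_instance)
    return max(needed, min_replicas)

# Hypothetical CPU usage samples (in cores) over recent monitoring periods:
print(provision([2.0, 2.5, 3.1, 3.8], capacity_per_instance=1.0, min_replicas=2))  # → 4
```

In the full model, the optimization module would override this proactive figure whenever the observed real-time load falls outside its thresholds.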
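The layer-reuse idea behind the scheduling heuristic can also be sketched. Because Docker layers are content-addressable, a matching layer digest on a node means that layer need not be downloaded again. The scoring rule, data layout, and tie-breaking below are hypothetical simplifications of the thesis's algorithm, which additionally weighs replica distribution:

```python
def score_node(node, image_layers):
    # Bytes of the image's layers already cached on the node.
    # A digest match implies the layer can be reused as-is.
    return sum(size for digest, size in image_layers.items()
               if digest in node["cached_layers"])

def pick_node(nodes, image_layers, cpu_demand):
    # Hypothetical placement heuristic: among nodes with enough free
    # CPU, prefer the one with the most reusable cached-layer bytes,
    # breaking ties by free CPU.
    feasible = [n for n in nodes if n["free_cpu"] >= cpu_demand]
    return max(feasible, key=lambda n: (score_node(n, image_layers), n["free_cpu"]))

# Illustrative image (layer digest -> size in MB) and cluster state:
image = {"sha256:aaa": 120, "sha256:bbb": 40, "sha256:ccc": 5}
nodes = [
    {"name": "node1", "free_cpu": 2.0, "cached_layers": {"sha256:aaa"}},
    {"name": "node2", "free_cpu": 1.0, "cached_layers": {"sha256:aaa", "sha256:bbb"}},
]
print(pick_node(nodes, image, cpu_demand=1.0)["name"])  # → node2 (160 MB reusable vs 120)
```

Favoring nodes with warm layer caches is what shortens the download-and-install phase and thus the service creation and scale-out times reported in the experiments.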
Keywords/Search Tags:Container-based Cloud, Docker Image, Load Changes, Load Prediction, Cluster Scheduling