With the rapid development of cloud computing technology, more and more applications are being deployed in containers. Kubernetes, a container resource management and scheduling platform, has become the de facto industry standard for container orchestration owing to its high availability, automation, and scalability. For large-scale cluster computing environments with heavy container workloads, it is necessary to study Kubernetes-based container scheduling techniques to resolve the tension between the utilization and the stability of cluster computing resources. With the maturity of cloud-native technology, it is now possible to build a unified solution for container scheduling, management, and operations on top of Kubernetes. However, the dynamic nature of cluster computing resources and the diversity of container resource requirements pose challenges to Kubernetes-based container scheduling. To address this problem, this thesis develops optimized container scheduling techniques that account for both the dynamics and the diversity of CPU and GPU computing resources, and builds an automated, scalable, real-time container scheduling and management system. The main contributions of this thesis are as follows.

1) A load-balancing scheduling technique for multidimensional container resource requirements is presented. Because the default Kubernetes scheduler considers only CPU and memory, it cannot meet the fine-grained scheduling requirements of edge computing scenarios. A Kubernetes-based scheduling scheme for CPU computing resources, E-KCSS, is therefore designed across four layers: a control layer, a scheduling plug-in layer, a monitoring layer, and a node agent layer. E-KCSS uses five indicators (CPU, memory, bandwidth, disk, and Pod count) as scheduling factors to capture the diversity of container requirements, and drives scheduling with cluster resource monitoring data from a time-series database to achieve dynamic
container scheduling. To address the problem that Kubernetes' preset weight factors cannot meet the personalized resource needs of containers, a weight-adaptive mechanism is introduced: it computes the multidimensional resource utilization of each node and the resource requirements of each container, automatically derives a per-container set of multidimensional resource weights, and selects the optimal node for the container according to the resource balance degree of the candidate nodes. Experimental results show that, compared with the default Kubernetes scheduler, E-KCSS raises the upper limit of container deployment by 23.63% and reduces cluster resource imbalance by 6.87% under heterogeneous request scenarios.

2) A fine-grained shared scheduling technique for GPU computing resources is presented. First, a fine-grained shared container scheduling architecture for GPU resources is designed on top of a message-cache architecture, covering two aspects: a monitoring layer and a scheduling layer. It improves the timeliness of GPU resource scheduling through lightweight optimization of the native Kubernetes API Server component and real-time awareness of Kubernetes resource objects. To address the lack of a global view of GPU resources caused by Kubernetes' inability to control GPU devices at a fine granularity, a Kubernetes-based GPU device collector, G-RCFK, is designed to achieve unified control of GPU devices and fine-grained, real-time collection of GPU metrics. To address the problem that GPU containers cannot be shared because the default Kubernetes scheduler allocates GPUs to containers in whole-device units, a shared container scheduling technique for GPU resources, Nvi-Scheduler, is designed. Based on user priority and resource priority, it achieves shared container scheduling by jointly considering GPU utilization and GPU memory utilization. Experimental results show that, compared with the
Nvidia-Device-Plugin and KubeShare schedulers, the proposed technique schedules GPU containers more flexibly, improves deployment efficiency by 18.68%, and enables GPU resources to be shared among containers.

3) Building on the techniques in 1) and 2), a container scheduling and management system is designed and implemented across four aspects: physical equipment, infrastructure, business services, and user interface. Built on Kubernetes and integrating lightweight container technology, the system unifies the hybrid scheduling of CPU and GPU computing resources and provides a convenient, efficient, and versatile platform for container scheduling and management. The design and implementation of key components, such as container life-cycle management, visual management, and container Web shell interaction, are discussed in detail, and experiments verify the functions of the system.
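The weight-adaptive, balance-degree node selection described in contribution 1) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's exact algorithm: the five resource dimensions come from the text, but the weight formula (demand-proportional weights) and the balance metric (standard deviation of weighted post-placement utilization) are assumptions made here for clarity.

```python
# Hypothetical sketch in the spirit of E-KCSS's weight-adaptive scheduling.
# Weight formula and balance metric are illustrative assumptions.
from statistics import pstdev

RESOURCES = ["cpu", "memory", "bandwidth", "disk", "pods"]

def adaptive_weights(request):
    """Weight each dimension by the container's relative demand on it."""
    total = sum(request[r] for r in RESOURCES)
    return {r: request[r] / total for r in RESOURCES}

def balance_score(capacity, used, request):
    """Lower score = more balanced node after placing the container."""
    w = adaptive_weights(request)
    post = [w[r] * (used[r] + request[r]) / capacity[r] for r in RESOURCES]
    return pstdev(post)  # spread of weighted post-placement utilization

def select_node(nodes, request):
    """Pick the feasible node whose placement keeps resources most balanced."""
    feasible = [
        n for n in nodes
        if all(n["used"][r] + request[r] <= n["capacity"][r] for r in RESOURCES)
    ]
    return min(feasible, key=lambda n: balance_score(n["capacity"], n["used"], request))
```

Under this sketch, a container whose request is disk-heavy automatically receives a large disk weight, so nodes with little spare disk score poorly for it even if their CPU and memory are idle.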
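The fractional GPU sharing of contribution 2) can likewise be sketched. This is an illustrative assumption-laden sketch, not Nvi-Scheduler itself: the thesis combines user priority, resource priority, GPU utilization, and GPU memory utilization, whereas the code below reduces that to a single priority field, a two-dimensional fit test, and a greedy best-fit packing rule.

```python
# Hypothetical sketch of fractional GPU sharing in the spirit of Nvi-Scheduler.
# The priority ordering, fit test, and packing rule are illustrative assumptions.

def fits(gpu, req):
    """A request fits if the GPU has enough spare compute and memory."""
    return gpu["util_free"] >= req["util"] and gpu["mem_free"] >= req["mem"]

def place(gpus, requests):
    """Greedily place requests (highest priority first) on shared GPUs."""
    placements = {}
    for req in sorted(requests, key=lambda r: -r["priority"]):
        candidates = [g for g in gpus if fits(g, req)]
        if not candidates:
            continue  # request stays pending
        # Best-fit: prefer the GPU left with the least spare compute,
        # so lightly loaded GPUs stay free for large containers.
        best = min(candidates, key=lambda g: g["util_free"] - req["util"])
        best["util_free"] -= req["util"]
        best["mem_free"] -= req["mem"]
        placements[req["name"]] = best["name"]
    return placements
```

The key contrast with the default whole-device allocation is visible here: two containers each requesting 50% of a GPU's compute can land on the same physical device instead of each pinning an entire GPU.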