
Research On Critical Issues Of Performance And Security In Cloud Datacenters

Posted on: 2021-01-21
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X K Hu
Full Text: PDF
GTID: 1488306503982279
Subject: Computer Science and Technology

Abstract/Summary:
With the wide adoption and deployment of cloud computing, traditional Internet datacenters have been evolving into cloud datacenters, for which performance and security have always been significant concerns. This thesis focuses on the performance and security of cloud datacenters, following current trends in both academia and industry. Starting from three key and interconnected fields, namely system virtualization, heterogeneous accelerators, and sensitive data security, it studies three critical issues that need to be addressed now: the event path of I/O virtualization, the I/O interaction with heterogeneous accelerators in cloud datacenters, and the protection and usage of tenants' private keys in the cloud environment.

A research priority of system virtualization is I/O virtualization, whose main bottleneck lies in the event path: frequent interventions by the Virtual Machine Monitor (VMM) trigger a large number of costly VM Exits. This thesis first studies the event path of I/O virtualization (a key performance issue) in order to establish an efficient virtual I/O event path. The shortcomings of prior software solutions motivated the hardware-assisted Posted-Interrupt (PI) technology, which provides exit-free interrupt delivery and completion. Despite its usefulness, PI is still some distance from an optimal virtual I/O event path: first, PI acts only on the interrupt path, while guests' I/O requests may still trigger plenty of VM Exits; second, PI-based interrupt delivery may still suffer severe I/O processing latency caused by the scheduling of virtual CPUs. Aiming at an optimal virtual I/O event path, this thesis builds on PI and proposes ES2, an efficient and responsive virtual I/O event system that simultaneously improves bidirectional I/O event delivery between guests and their devices. ES2 first introduces a hybrid I/O handling scheme to deliver guests' I/O requests efficiently. The hybrid scheme switches appropriately between the existing exit-based notification mode and a newly added exit-free polling mode, reaping the strengths of both notification and polling, and it provides two mode-switch algorithms: a generic perceptive mode-switch algorithm and a specific optimistic mode-switch algorithm. ES2 then leverages an intelligent interrupt redirection mechanism to optimize PI delivery: virtual interrupts are redirected to the most appropriate virtual CPU so as to effectively enhance guests' I/O responsiveness. Specifically, when selecting the interrupt destination among multiple candidate virtual CPUs, ES2 takes cache affinity as the prime consideration to guarantee I/O processing performance, and it provides a precise interrupt delivery strategy for the case where the first running virtual CPU must be located. A comprehensive evaluation demonstrates that ES2 effectively reduces I/O-related VM Exits, greatly enhances I/O virtualization performance in terms of throughput and latency, and provides good performance scalability.
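To make the hybrid I/O handling idea concrete, the sketch below shows only the general notion of rate-based switching between an exit-based notification mode and an exit-free polling mode; it is not taken from the thesis, the class name, thresholds, and window size are assumptions for illustration, and the perceptive and optimistic mode-switch algorithms themselves are not reproduced.

```python
# Illustrative sketch (not from the thesis): switch between an exit-based
# notification mode and an exit-free polling mode based on the recent rate of
# guest I/O requests. All names and thresholds are assumptions.
import time
from collections import deque

NOTIFY, POLL = "notify", "poll"

class HybridIOHandler:
    def __init__(self, high_rate=1000.0, low_rate=100.0, window_s=0.01):
        self.mode = NOTIFY          # start in the exit-based notification mode
        self.high_rate = high_rate  # requests/s above which polling pays off
        self.low_rate = low_rate    # requests/s below which polling wastes CPU
        self.window_s = window_s    # observation window in seconds
        self.stamps = deque()

    def on_request(self):
        """Record one guest I/O request and reconsider the handling mode."""
        now = time.monotonic()
        self.stamps.append(now)
        while self.stamps and now - self.stamps[0] > self.window_s:
            self.stamps.popleft()
        rate = len(self.stamps) / self.window_s
        if self.mode == NOTIFY and rate >= self.high_rate:
            self.mode = POLL    # busy device: stop taking VM Exits, poll instead
        elif self.mode == POLL and rate <= self.low_rate:
            self.mode = NOTIFY  # idle device: fall back to notification
        return self.mode
```

The trade-off this toy captures is the one the thesis targets: polling removes exit overhead under load but burns cycles when the device is idle, so the switch has to react to the observed request pattern.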
The development of I/O virtualization has enabled heterogeneous accelerators, as a new type of I/O device, to enter the cloud and become a promising way to increase the computing power of cloud datacenters. The current research priority is how to improve system and application acceleration performance, which depends not only on the accelerator itself but also on the I/O interaction with it (i.e., offload I/O). This thesis next studies the I/O interaction with heterogeneous accelerators in cloud datacenters (a key performance issue) to enable close collaboration between the CPU and accelerators and effectively enhance acceleration performance. Taking widely adopted event-driven Web workloads as the research subject, we first reveal that the direct integration of heterogeneous accelerators (the straight offload mode) suffers from frequent blocking in the offload I/O, which degrades acceleration performance. We then analyze and compare two novel high-performance offload modes based on the Intel QAT accelerator: the asynchronous offload mode for SSL/TLS processing (asynchronous concurrency) and the pipelining offload mode for HTTP compression (synchronous concurrency). Because both offload modes allow concurrent offload tasks from a single application process or thread, the blocking penalty can be amortized or even eliminated, and the utilization of the parallel computation units inside the accelerator can be greatly increased. On top of the asynchronous offload mode for SSL/TLS processing, this thesis proposes two important performance optimizations to further improve the I/O interaction. The first is a heuristic polling scheme, integrated into the application to avoid frequent thread switches and, more importantly, to use application-level knowledge to guide the polling action, balancing efficiency and timeliness. The second is kernel-bypass asynchronous event notification, which eliminates the expensive user/kernel switches of asynchronous event delivery to further enhance application acceleration performance. In addition, in the Nginx-based prototype implementation, this thesis extends the simple SSL Engine setting scheme into an SSL Engine framework that provides flexible and powerful accelerator configurations and makes it easier for developers to introduce other types of crypto accelerators. Extensive experiments demonstrate that both the asynchronous offload mode and the pipelining offload mode greatly increase acceleration performance, and that the two proposed optimizations provide further improvement. Combining the above analysis, optimizations, and evaluation, this thesis draws a series of research conclusions that serve as references for cloud datacenter workloads intending to use heterogeneous accelerators and achieve high-performance offloading.
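As a rough illustration of the asynchronous offload mode and the heuristic polling optimization, the sketch below keeps several offload jobs in flight from a single thread and reaps completions only once enough work has accumulated to amortize the cost; a thread pool and a hash function stand in for the QAT accelerator and the crypto work, and every name and threshold here is an assumption, since the real prototype drives QAT through an OpenSSL asynchronous engine inside Nginx.

```python
# Conceptual stand-in only: concurrent offload from one application thread,
# with completion reaping driven by the number of in-flight jobs rather than
# per-job blocking. The executor plays the role of the accelerator.
from concurrent.futures import ThreadPoolExecutor
import hashlib

accelerator = ThreadPoolExecutor(max_workers=8)  # stand-in for QAT instances

def offload(data: bytes):
    """Submit one crypto job without blocking the caller."""
    return accelerator.submit(hashlib.sha256, data)  # placeholder workload

def handle_requests(payloads, batch=4):
    in_flight, results = [], []
    for p in payloads:
        in_flight.append(offload(p))        # keep submitting; no per-job stall
        if len(in_flight) >= batch:         # heuristic: reap only when enough
            results += [f.result() for f in in_flight]  # work is queued up
            in_flight.clear()
    results += [f.result() for f in in_flight]  # drain the tail
    return results

digests = handle_requests([b"x" * 1024] * 10)
print(len(digests))  # 10
accelerator.shutdown()
```

The straight offload mode corresponds to waiting for each job's result immediately after every single submission, which is exactly the per-job blocking pattern the asynchronous mode avoids.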
The emergence of heterogeneous accelerators offers a new possibility for achieving both private key security and high-performance cryptographic computation, but it does not directly adapt to the cloud environment. This thesis finally studies the protection and usage of tenants' private keys in the cloud environment (a key security and performance issue), with the goal of covering both key protection and high-performance cryptographic computation. Existing Keyless or Keyguard solutions suffer from either performance or security limitations. A novel architecture, represented by Intel KPT, combines a Trusted Platform Module (TPM) for trust and key provisioning with a crypto accelerator for crypto offloading. However, the straightforward use of this “KPT-like architecture” to protect cloud tenants' private keys faces challenges in scalability (supporting an adequate number of co-resident virtual machines or containers), key provisioning latency, and transparency. This thesis designs Cloud KPT, a comprehensive key management system, to resolve these challenges. Based on the idea of key wrapping, Cloud KPT introduces a unique Tenant Symmetric Key (TSK) for each tenant, which serves as the master key for using the “KPT-like hardware” and encrypts all of the tenant's private keys, thus addressing the first two challenges. By adopting the strategy of loading private keys from the accelerator engine, together with a dedicated encryption scheme for private keys, the upper application and crypto library remain unaware of the underlying key protection mechanism, thus addressing the transparency challenge. Because the TSK, replacing the private keys, becomes the tenant's most significant key, Cloud KPT incorporates certificate trust into the TPM 2.0 key duplication protocol to guarantee secure TSK provisioning, providing not only secure transmission but also secure destination storage that is protected by a genuine TPM and can be used only by the valid tenant. For an in-cloud key server that needs secure TSK storage, Cloud KPT provides a TPM-based two-phase TSK duplication scheme that endows the in-cloud key server solution with high security, low cost, and flexible key provisioning. In addition, this thesis shows how to reuse Cloud KPT when SGX is used as the trusted hardware technology and presents two design ideas, including direct reuse. A comprehensive evaluation demonstrates that Cloud KPT greatly expands the protection capacity for private keys and effectively reduces key provisioning latency. Cloud KPT also has low runtime overhead and, benefiting from hardware acceleration, significantly outperforms the software baseline for private key operations and SSL/TLS processing.
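The key-wrapping idea behind the TSK can be sketched roughly as follows; this is an illustrative example only, in which an in-memory AES-GCM key stands in for a TSK that Cloud KPT would provision through TPM 2.0 key duplication and use inside the accelerator engine, and the function names are hypothetical.

```python
# Illustrative sketch of key wrapping: one per-tenant symmetric key (TSK)
# encrypts every tenant private key at rest, so no plaintext private key is
# stored in the cloud. TPM protection and the accelerator are omitted here.
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

tsk = AESGCM.generate_key(bit_length=256)  # tenant symmetric key (TSK)

def wrap_private_key(private_key) -> bytes:
    """Serialize a tenant private key and encrypt it under the TSK."""
    pem = private_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    )
    nonce = os.urandom(12)
    return nonce + AESGCM(tsk).encrypt(nonce, pem, b"tenant-private-key")

def unwrap_private_key(blob: bytes):
    """Decrypt a wrapped key; in Cloud KPT this happens inside the engine."""
    nonce, ciphertext = blob[:12], blob[12:]
    pem = AESGCM(tsk).decrypt(nonce, ciphertext, b"tenant-private-key")
    return serialization.load_pem_private_key(pem, password=None)

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped = wrap_private_key(key)
assert unwrap_private_key(wrapped).key_size == 2048
```

Under this arrangement only the single TSK, rather than every individual private key, has to be provisioned to and protected by the trusted hardware, which is what lets the scheme scale to many co-resident tenants and shortens key provisioning.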
Keywords/Search Tags:Cloud Datacenter, Performance and Security, Virtual I/O Event Path, VMM Intervention, Heterogeneous Accelerator, Offload I/O, Private Key Protection, KPT-like Architecture