Edge computing integrates computing, storage, and network resources at the network edge to provide on-demand, real-time services and to meet agile service demands; it is considered one of the important trends in the future development of cloud computing. In recent years, cloud-native computing has become the mainstream of cloud computing, enabling rapid, on-demand application orchestration through service-oriented, loosely coupled architectures. Cloud-native orchestration systems such as Kubernetes and microservice governance technologies such as Service Mesh are regarded as the core of future distributed systems. Edge computing, as an extension of cloud computing, follows a similar development path, leveraging lightweight cloud-native technologies such as containers to achieve resource isolation, rapid deployment, and efficient management. The introduction of cloud-native technology into edge computing and the development of an edge-native ecosystem have been widely recognized by industry.

Despite the enormous potential of edge-native computing, many challenges remain in practice, such as the difficulty of mitigating communication delays between microservices. This thesis therefore optimizes microservice communication in two stages: the deployment of microservices and the governance of their traffic at runtime. On the one hand, the "best-effort" scheduling strategy of Kubernetes ignores the impact of inter-service communication on performance when scheduling microservice applications, resulting in high cross-server communication costs for some microservices, especially in edge cloud environments. On the other hand, to tame the complexity of microservice communication, a service mesh decouples the control plane and the data plane of service-to-service communication: in the data plane, each microservice is attached to a sidecar proxy that encapsulates the complex communication between microservices, while the behavior of these sidecar proxies is controlled by a centralized controller in the control plane. This inevitably introduces additional communication control delays, which in turn affect the response time of microservices. Because microservices are widely distributed and the communication latency between edge servers is relatively long, this problem is particularly prominent in edge clouds.

This thesis first addresses the communication deficiencies of the Kubernetes scheduler (kube-scheduler) and modifies its priorities algorithm to schedule highly communicating microservices onto the same edge server, minimizing communication overhead. However, greedily placing highly communicating microservices on the same edge server may overlook the match between each microservice's resource requests (i.e., CPU and memory) and the resources remaining on that server. When scheduling resource-intensive microservices, this approach may generate substantial resource fragmentation and even cause a performance bottleneck on a single resource, leading to resource waste. This thesis therefore proposes a communication- and resource-aware microservice scheduling architecture, called Kube-MSRM, which ensures low latency for cloud-native applications while balancing resource consumption across edge servers. Extensive experiments show that Kube-MSRM guarantees the required QoS of cloud-native applications while reducing the average resource consumption gap between edge servers, compared with Kubernetes and a state-of-the-art scheduling framework, by 2.54% and 39.39%, respectively, under low loads; under high loads, it reduces the average gap by 48.29% and by a factor of 3.26, respectively.
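To make the scheduling idea concrete, the following is a minimal sketch of a node scoring function in the spirit of Kube-MSRM. It is an illustration only: the weight alpha, the traffic matrix, and all names are assumptions of this sketch, not the actual implementation evaluated in the thesis.

# Illustrative sketch of a communication- and resource-aware node score
# in the spirit of Kube-MSRM. The traffic matrix, the weight `alpha`, and
# all names are hypothetical; the thesis's actual scoring logic may differ.

def node_score(pod, node, traffic, placed, alpha=0.5):
    """Score a candidate edge server for a pending microservice pod.

    traffic[(a, b)]: observed request rate between microservices a and b.
    placed[node_name]: set of microservices already placed on that node.
    """
    # Communication affinity: total traffic between the pending pod and
    # microservices already running on this node (higher is better).
    affinity = sum(traffic.get((pod["name"], m), 0) +
                   traffic.get((m, pod["name"]), 0)
                   for m in placed.get(node["name"], set()))

    # Resource balance: penalize placements that leave CPU and memory
    # utilization far apart, which signals resource fragmentation.
    cpu_util = (node["cpu_used"] + pod["cpu"]) / node["cpu_cap"]
    mem_util = (node["mem_used"] + pod["mem"]) / node["mem_cap"]
    if cpu_util > 1.0 or mem_util > 1.0:
        return float("-inf")  # node cannot fit the pod's requests
    balance = 1.0 - abs(cpu_util - mem_util)

    # Blend the two criteria; alpha trades communication locality
    # against balanced resource consumption.
    return alpha * affinity + (1 - alpha) * balance


# Usage: pick the highest-scoring feasible node for a pending pod.
nodes = [
    {"name": "edge-1", "cpu_used": 2.0, "cpu_cap": 8.0,
     "mem_used": 4.0, "mem_cap": 16.0},
    {"name": "edge-2", "cpu_used": 6.0, "cpu_cap": 8.0,
     "mem_used": 2.0, "mem_cap": 16.0},
]
traffic = {("frontend", "cart"): 120.0}          # requests per second
placed = {"edge-1": {"cart"}, "edge-2": set()}
pod = {"name": "frontend", "cpu": 1.0, "mem": 2.0}

best = max(nodes, key=lambda n: node_score(pod, n, traffic, placed))
print(best["name"])  # edge-1: co-locates with "cart" and stays balanced

The design point this sketch captures is that a node scores well only if it both attracts traffic from already-placed communication partners and keeps its CPU and memory utilization close to each other, which is how the resource fragmentation described above is avoided.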
Further, to reduce the impact of communication control on the quality of service of cloud-native applications when the service mesh governs inter-service traffic in the edge cloud, rule caching has been proposed; how to manage such rule caches, however, has not been studied so far. To this end, this thesis first delves into the architecture of a service mesh to uncover the advantages and disadvantages of the two candidate cache sites, namely the control-plane controller and the per-microservice sidecar proxy. It then investigates how to achieve fast service-to-service communication while balancing the use of the two cache sites, taking into account cache capacity constraints and the heterogeneity of request rates. The problem is formulated as an integer linear program (ILP) and shown to be NP-hard; a simplified version of the formulation is sketched below. A dual-standard randomized rounding (DSRR) based algorithm is then proposed, and its achievable approximation ratio is analyzed theoretically. Trace-driven simulations show that DSRR reduces the response latency of communication control requests by a factor of 4.35 on average while reducing the cache footprint by 24.75%.

In summary, to alleviate the high communication latency among microservices when deploying and governing cloud-native applications in edge clouds, this thesis optimizes microservice communication in two stages, the deployment of microservices and the governance of their traffic at runtime, thereby ensuring the quality of service of cloud-native applications in edge clouds.
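For concreteness, the following is a simplified, illustrative version of the cache-placement formulation described above. The notation (rule set R, request rates \lambda_r, rule sizes c_r, cache capacities C_s, and lookup latencies d_side < d_ctrl < d_miss for a sidecar hit, a controller hit, and a full miss) is assumed for this sketch and is not necessarily the thesis's exact model.

% Simplified, illustrative rule-cache placement ILP (notation assumed):
%   x_{r,s} = 1 iff requests for rule r are served from site s,
%   where s ranges over the sidecar cache, the controller cache,
%   and a full cache miss.
\begin{align}
  \min_{x}\ \ & \sum_{r \in R} \lambda_r \left(
      d_{\mathrm{side}}\, x_{r,\mathrm{side}}
    + d_{\mathrm{ctrl}}\, x_{r,\mathrm{ctrl}}
    + d_{\mathrm{miss}}\, x_{r,\mathrm{miss}} \right) \\
  \text{s.t.}\ \ & x_{r,\mathrm{side}} + x_{r,\mathrm{ctrl}} + x_{r,\mathrm{miss}} = 1,
      && \forall r \in R, \\
  & \sum_{r \in R} c_r\, x_{r,s} \le C_s,
      && s \in \{\mathrm{side}, \mathrm{ctrl}\}, \\
  & x_{r,s} \in \{0, 1\},
      && \forall r,\ \forall s.
\end{align}

Even this simplified form is a multiple-knapsack-type assignment problem and hence NP-hard. A randomized-rounding algorithm in the spirit of DSRR would solve the LP relaxation (x_{r,s} \in [0,1]) and cache rule r at site s with probability equal to the fractional optimum x*_{r,s}; the usual bi-criteria guarantee of such roundings trades a bounded violation of the capacity constraints for near-optimal expected latency.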