
Study On Protocol Design And Resource Management Of Passive Optical Networks In End-to-end Communications For Edge Computing

Posted on: 2021-03-30    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X M Shen
GTID: 1368330632450570    Subject: Optical communication technology
Abstract/Summary:
The increasing number of mobile devices and applications has led to a surge in network traffic, requiring higher network capacity. Meanwhile, driven by the ultra-reliable and low-latency communication (uRLLC) services of 5G (e.g., autonomous driving), edge computing has emerged. Edge computing extends cloud services and functions to the edge of the network (usually the access network): deployed at the access network central office, it forms a small-scale edge data center; deployed at user-side access facilities (such as base stations, optical network units, gateways, and roadside units), it forms edge computing nodes. By delivering computing and storage services close to users, it greatly reduces transmission latency and eases congestion in the core and transport networks. Edge computing brings computing resources into the access network, but it also shifts the burden of low-latency guarantees onto edge computing facilities and access networks. Passive optical network (PON) technology plays a key role in data center networks and access networks thanks to its high capacity, high transmission rate, low power consumption, and low cost. Integrating edge computing with PON and wireless access networks is an inevitable trend in the evolution of network architecture and provides a stable computing and communication foundation for end-to-end communications for edge computing. However, the diversity of 5G scenarios and quality-of-service (QoS) requirements poses challenges to such integrated networks: from the user perspective, services demand low latency, have differentiated QoS requirements, and exhibit strong mobility; from the network perspective, computing and communication resources are scarce, and resource utilization and communication efficiency are low. With these considerations in mind, this thesis studies the protocol design and resource management of passive optical networks in end-to-end communications for edge computing, focusing on three aspects: performance enhancement of edge data centers, flexible management of optical and wireless integrated access networks, and low-latency service guarantees.

Edge data centers must handle access from multiple services, with highly bursty traffic and unbalanced load among servers. To support low-latency communication among edge data center servers and thus keep edge computing tasks fast, this thesis considers a passive optical interconnect for the edge data center based on arrayed waveguide gratings (AWGs) and optical splitters. A poll-based medium access control (MAC) protocol is proposed to support efficient, collision-free multipoint-to-multipoint communication among servers. To cope with traffic bursts at the top of the rack and unbalanced load among servers, a dynamic bandwidth allocation (DBA) algorithm is developed that allocates resources in both the time and frequency domains to provide differentiated QoS for services. Simulation results show that, at typical link utilization on a top-of-rack optical interconnect, the proposed resource allocation scheme ensures both low latency (<0.1 ms) and a packet loss rate close to zero.
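To make the time-and-frequency allocation idea concrete, the following Python fragment sketches one possible grant scheduler for such a poll-based interconnect: servers report their queue lengths, and grants are packed onto the least-loaded wavelength, high-priority queues first. The function name, parameters, and priority rule are illustrative assumptions for this sketch, not the DBA algorithm developed in the thesis.

```python
# Minimal sketch of a two-dimensional (time slot x wavelength) grant scheduler.
# All names and parameter values are hypothetical.

def allocate_grants(requests, wavelengths=4, slots_per_cycle=100):
    """requests: list of dicts {'server': id, 'priority': int, 'queue': slots}.
    Returns a list of (server, wavelength, start_slot, length) grants."""
    next_free = [0] * wavelengths          # next free slot on each wavelength
    grants = []
    # Serve higher-priority (lower number) and longer queues first.
    for req in sorted(requests, key=lambda r: (r['priority'], -r['queue'])):
        # Pick the least-loaded wavelength to balance bursty load.
        w = min(range(wavelengths), key=lambda i: next_free[i])
        length = min(req['queue'], slots_per_cycle - next_free[w])
        if length <= 0:
            continue                        # no capacity left; wait for the next poll cycle
        grants.append((req['server'], w, next_free[w], length))
        next_free[w] += length
    return grants

if __name__ == '__main__':
    demo = [{'server': 1, 'priority': 0, 'queue': 30},
            {'server': 2, 'priority': 1, 'queue': 80},
            {'server': 3, 'priority': 0, 'queue': 50}]
    for s, w, start, length in allocate_grants(demo):
        print('server %d -> wavelength %d, slots %d..%d' % (s, w, start, start + length - 1))
```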
Integrating edge computing with PON and wireless access networks is an important way to deploy edge computing and is key to letting users access and use edge computing resources with low latency. Meanwhile, 5G will support multiple types of services with differentiated QoS requirements, in particular stringent latency and reliability requirements. In a mobile backhaul network coordinated between edge computing and PON, the backhaul bandwidth is shared by the traffic that multiple services generate between base stations and between base stations and the edge data center, for example the migration traffic produced when services migrate between edge computing nodes, alongside other non-migration traffic. To ensure low latency, satisfy the differentiated QoS requirements of multiple services, and improve resource utilization, this thesis proposes a delay-aware bandwidth slicing scheme that dynamically and efficiently allocates bandwidth to migration and non-migration traffic so that each meets its latency requirement. Simulation results show that the proposed scheme supports different QoS requirements while keeping latency low, and that it improves the resource utilization of the coordinated edge computing and PON mobile backhaul network.

Edge computing extends computing resources to user-side access network facilities (such as base stations) and therefore has clear advantages in supporting low-latency services. However, because the communication and computing resources of edge computing nodes are limited and users are highly mobile, resources must be shared through service migration between edge computing nodes, which challenges the low-latency service guarantee. Taking vehicle-to-everything (V2X) as a use case of the 5G uRLLC scenario, this thesis focuses on supporting low-latency services under user mobility and proposes a QoS-aware service migration strategy between edge computing nodes to reduce the impact of user mobility on latency during migration. To overcome the limited resources of individual edge computing nodes, better support service migration, and reduce the end-to-end delay of services, a resource management scheme based on collaboration between edge computing nodes is also proposed. A simulation platform is built with Python and SUMO using real traffic traces from the city of Luxembourg. The simulation results show that the end-to-end latency is closely tied to the mobile backhaul capacity and the service migration delay, and that the proposed migration strategy and resource management scheme effectively support low-latency services.
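As a rough illustration of the delay-aware bandwidth slicing described above, the sketch below splits a shared backhaul capacity between migration and non-migration traffic so that each class gets enough headroom for its latency budget. The M/M/1-style sizing, fixed packet size, and parameter names are assumptions made for this example, not the scheme proposed in the thesis.

```python
# Minimal sketch: size each slice as offered load plus headroom for its delay budget,
# then rescale if the backhaul cannot fit both slices. Hypothetical model and names.

def slice_bandwidth(total_capacity_mbps, classes):
    """classes: dict name -> {'load_mbps': offered load, 'delay_budget_ms': budget}.
    Returns dict name -> allocated capacity in Mb/s."""
    alloc = {}
    for name, c in classes.items():
        # Rough M/M/1 sizing with 1500-byte (0.012 Mb) packets:
        # mean delay ~ packet_size / (capacity - load), so add headroom to meet the budget.
        headroom_mbps = 0.012 / (c['delay_budget_ms'] * 1e-3)
        alloc[name] = c['load_mbps'] + headroom_mbps
    total_needed = sum(alloc.values())
    if total_needed > total_capacity_mbps:
        # Not enough capacity: shrink both slices proportionally.
        scale = total_capacity_mbps / total_needed
        alloc = {k: v * scale for k, v in alloc.items()}
    else:
        # Spare capacity: hand it to the more heavily loaded class.
        heavy = max(classes, key=lambda k: classes[k]['load_mbps'])
        alloc[heavy] += total_capacity_mbps - total_needed
    return alloc

if __name__ == '__main__':
    print(slice_bandwidth(1000, {
        'migration':     {'load_mbps': 300, 'delay_budget_ms': 1.0},
        'non-migration': {'load_mbps': 500, 'delay_budget_ms': 5.0},
    }))
```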
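Similarly, the following sketch shows one simple way a QoS-aware migration decision could be made: migrate the service toward the vehicle's new base station only if the latency saved over its expected dwell time outweighs the one-off migration delay, or if the current latency already violates the budget. The latency model and all parameters are hypothetical; the strategy in the thesis is evaluated instead on the Python/SUMO platform with real Luxembourg traffic traces.

```python
# Minimal sketch of a QoS-aware migration decision. All parameters are hypothetical.

def should_migrate(hops_current, hops_candidate, per_hop_backhaul_ms,
                   proc_current_ms, proc_candidate_ms,
                   migration_delay_ms, remaining_time_s, request_rate_hz,
                   latency_budget_ms):
    """Return True if moving the service to the candidate edge node is worthwhile."""
    e2e_now = hops_current * per_hop_backhaul_ms + proc_current_ms
    e2e_after = hops_candidate * per_hop_backhaul_ms + proc_candidate_ms
    # Total latency saved over the vehicle's expected dwell time in the new cell,
    # weighed against the one-off cost of moving the service state.
    saving = (e2e_now - e2e_after) * request_rate_hz * remaining_time_s
    violates_budget = e2e_now > latency_budget_ms
    return violates_budget or saving > migration_delay_ms

if __name__ == '__main__':
    print(should_migrate(hops_current=3, hops_candidate=1, per_hop_backhaul_ms=0.5,
                         proc_current_ms=2.0, proc_candidate_ms=1.5,
                         migration_delay_ms=50.0, remaining_time_s=30.0,
                         request_rate_hz=10.0, latency_budget_ms=10.0))
```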
Keywords/Search Tags:Passive optical networks, edge computing, low latency, communication protocol, resource management, edge data center, bandwidth slicing, service migration