
In-Network Computing Technology And Its Application Research

Posted on: 2021-04-15
Degree: Master
Type: Thesis
Country: China
Candidate: P Zhou
Full Text: PDF
GTID: 2428330623468235
Subject: Engineering
Abstract/Summary:
At present, research on in-network computing is still in its infancy, and its focus is how to use programmable network technology to better support in-network computing applications. This thesis explores typical in-network computing applications based on the P4 language, covering the following three research topics.

First, users now expect internet services to respond quickly to content requests, but factors such as excessive server load and long transmission distances make it difficult for response times to meet these expectations. Content caching, which answers each request from the nearest copy, is an effective way to reduce response time. A content delivery network (CDN) implements caching by deploying cache servers at the edge of the network, but it does not truly cache content inside the network. A content centric network (CCN) performs high-speed in-network caching at routers, but CCN is a brand-new network architecture with no mature hardware as yet, so large-scale deployment will take a long time. Drawing on CCN's idea of in-network caching, this thesis uses programmable switches to identify content requests and realizes in-network caching through cooperation between programmable switches and cache servers. To choose suitable locations for cache nodes, the thesis formulates a mixed-integer linear programming (MILP) model that minimizes hop count. Simulation results show that this programmable-switch-based caching method effectively improves the response speed of content requests.

Second, machine learning tasks whose data volumes reach the PB and EB scale generally adopt distributed training: multiple workers train on different data sets simultaneously and send their trained parameters to a parameter server for aggregation, and the updated parameters are then returned to each worker for the next round of training. In this scenario the computation performed by the parameter server is very simple, but the volume of data exchanged between the workers and the parameter server is very large, so the network becomes the bottleneck of distributed training speed. This thesis aggregates the parameters trained by the workers on a programmable switch and uses a retransmission mechanism to handle the loss of parameter packets. Simulation results show that this programmable-switch-based parameter aggregation method effectively improves training speed.

Third, network traffic measurement gathers statistics about the traffic in a network and provides input for network management applications such as route planning, intrusion detection, and fault analysis. Sketch is a commonly used measurement method: it hashes flow IDs into an array of counters to estimate how often each flow appears. A Sketch implementation must process large numbers of network flows quickly, but current software implementations are too slow to process packets at line rate, while hardware implementations require expensive special-purpose equipment and offer poor flexibility. This thesis implements a Sketch on a programmable switch, which is sufficiently flexible and achieves line-rate flow measurement.
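The in-switch parameter aggregation described above can be illustrated with a toy simulation. All names, the loss model, and the per-slot bitmap are illustrative assumptions, not details from the thesis: a switch-side aggregator keeps one accumulator per parameter slot plus a record of which workers have contributed (so retransmitted duplicates are not double-counted), and each worker keeps resending a lost packet until it is delivered.

```python
import random

NUM_WORKERS = 3
NUM_SLOTS = 4          # length of the parameter vector (illustrative)
LOSS_RATE = 0.3        # illustrative link loss probability

random.seed(42)

class SwitchAggregator:
    """Toy model of on-switch aggregation: a per-slot running sum and a
    per-slot set of contributing workers, so a duplicate caused by
    retransmission is counted only once."""
    def __init__(self):
        self.sums = [0.0] * NUM_SLOTS
        self.seen = [set() for _ in range(NUM_SLOTS)]

    def receive(self, worker_id, slot, value):
        if worker_id in self.seen[slot]:   # duplicate after a retransmit
            return
        self.seen[slot].add(worker_id)
        self.sums[slot] += value

    def complete(self, slot):
        # Aggregate for this slot is ready once every worker contributed.
        return len(self.seen[slot]) == NUM_WORKERS

def send_with_retransmit(switch, worker_id, slot, value):
    """Resend over a lossy link until the packet is delivered."""
    while True:
        if random.random() >= LOSS_RATE:   # packet survived the link
            switch.receive(worker_id, slot, value)
            return

switch = SwitchAggregator()
# Worker w contributes the constant vector [w + 1, w + 1, ...].
grads = {w: [float(w + 1)] * NUM_SLOTS for w in range(NUM_WORKERS)}
for w, vec in grads.items():
    for slot, val in enumerate(vec):
        send_with_retransmit(switch, w, slot, val)

print(switch.sums)   # [6.0, 6.0, 6.0, 6.0]  (1 + 2 + 3 per slot)
```

Because the sender retransmits until delivery succeeds, the final aggregate is deterministic regardless of how many packets were dropped, which is the property the retransmission mechanism is meant to guarantee.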
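As a toy illustration of the counter-array Sketch idea described above, the following minimal Python count-min sketch hashes each flow ID into one counter per row and answers queries with the minimum over rows; the width, depth, and salted-hash scheme are illustrative assumptions, not the thesis's switch implementation.

```python
import hashlib

class CountMinSketch:
    """Toy count-min sketch: depth hash rows, width counters per row.
    Estimates can only overestimate the true count (hash collisions)."""
    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, flow_id):
        # One counter index per row, derived from a row-salted hash.
        for row in range(self.depth):
            h = hashlib.md5(f"{row}:{flow_id}".encode()).hexdigest()
            yield row, int(h, 16) % self.width

    def update(self, flow_id, count=1):
        for row, col in self._indexes(flow_id):
            self.table[row][col] += count

    def estimate(self, flow_id):
        # Min over rows bounds the collision error from above.
        return min(self.table[row][col]
                   for row, col in self._indexes(flow_id))

sketch = CountMinSketch()
for pkt in ["10.0.0.1->10.0.0.2"] * 5 + ["10.0.0.3->10.0.0.4"] * 2:
    sketch.update(pkt)
print(sketch.estimate("10.0.0.1->10.0.0.2"))  # >= 5; equals 5 unless every row collides
```

On a programmable switch the same structure maps naturally onto per-stage register arrays updated at line rate, which is what makes Sketch a good fit for in-network measurement.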
Keywords/Search Tags:in-network computing, programmable switch, content caching, distributed machine learning, network traffic measurement