
Research On Cache-oriented Data Name Lookup Acceleration Mechanism

Posted on: 2019-01-25    Degree: Master    Type: Thesis
Country: China    Candidate: K Yue    Full Text: PDF
GTID: 2428330545964987    Subject: Software engineering
Abstract/Summary:
In recent years, with the rapid development of network technology, Internet applications have penetrated every aspect of people's work and life, and users increasingly care about the data content transmitted over the network. However, the existing TCP/IP architecture is built around addresses, and it struggles to meet application requirements in terms of scalability, dynamics, and security. In view of this, Named Data Networking (NDN) proposes to shift the focus of the network from addresses to data content, overturning the traditional TCP/IP communication model.

In the NDN data plane, data name lookup is a key technology that directly determines forwarding performance and resource overhead. Compared with IP addresses, NDN data names have a complex, variable-length structure with no theoretical upper bound on length, which makes name lookup considerably more challenging. A prefix-tree-based lookup scheme can hardly meet performance requirements without dedicated hardware acceleration. Hash tables and Bloom filters, although simple to operate and fast on average, cannot be applied directly to prefix lookup, and hash collisions and false positives must be handled carefully. The Prefix Bloom Filter (PBF) exploits the prefix length distribution and combines hash tables with Bloom filters to accelerate lookup while reducing the false positive rate; however, its storage utilization is low and most of its storage space is wasted.

In view of this, this paper designs a more compact structure and a corresponding lookup algorithm based on the Prefix Bloom Filter, compressing storage overhead while preserving lookup speed, false positive rate, and accuracy. Experiments show that this scheme saves nearly 54% of the storage space while also improving lookup speed: on datasets in the fully matching and fully non-matching modes, the average lookup latency decreases by 11.5% and 7.5%, respectively. The scheme therefore not only compresses storage but also improves lookup performance.

On the other hand, conventional NDN packet forwarding requires consulting multiple tables. In some scenarios, such as streaming media transmission, a large number of packets with identical or similar data names must be processed in a short time. Therefore, after an in-depth analysis of the NDN forwarding pipeline, this paper designs a Packet Cache (PC) that uses the packet type together with the data name (with the segment number removed) as a joint key to directly index the required operations, thereby reducing unnecessary table lookups. Using Interest packets and Data packets as datasets, we compare the packet cache with conventional lookup schemes. The results show that the packet cache works well once the data name repetition rate exceeds 30%; at a repetition rate of 100%, for example when transferring a movie, the packet cache designed in this paper reduces table lookup overhead by about 60%.
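To make the lookup idea concrete, the following is a minimal C++ sketch of Bloom-filter-assisted longest-prefix matching on NDN names: one Bloom filter per prefix length steers the search, and an exact-match hash table resolves candidates and filters out false positives. All identifiers and parameters (kBits, kHashes, NameTable, and so on) are illustrative assumptions; this is the general PBF-style approach, not the compact structure proposed in the thesis.

```cpp
// Sketch: Bloom-filter-assisted longest-prefix match on '/'-delimited NDN names.
// Parameters and names are assumptions for illustration only.
#include <algorithm>
#include <bitset>
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

constexpr std::size_t kBits = 1 << 16;   // bits per Bloom filter (assumed)
constexpr int kHashes = 4;               // hash functions per filter (assumed)

class BloomFilter {
public:
    void insert(const std::string& key) {
        for (int i = 0; i < kHashes; ++i) bits_.set(index(key, i));
    }
    bool mayContain(const std::string& key) const {
        for (int i = 0; i < kHashes; ++i)
            if (!bits_.test(index(key, i))) return false;
        return true;                     // "maybe": false positives are possible
    }
private:
    static std::size_t index(const std::string& key, int seed) {
        return std::hash<std::string>{}(key + '#' + char('0' + seed)) % kBits;
    }
    std::bitset<kBits> bits_;
};

// Split "/a/b/c" into the prefixes "/a", "/a/b", "/a/b/c".
std::vector<std::string> prefixesOf(const std::string& name) {
    std::vector<std::string> out;
    for (std::size_t pos = name.find('/', 1); ; pos = name.find('/', pos + 1)) {
        if (pos == std::string::npos) { out.push_back(name); break; }
        out.push_back(name.substr(0, pos));
    }
    return out;
}

struct NameTable {
    std::vector<BloomFilter> byLength;          // one filter per prefix length
    std::unordered_map<std::string, int> fib;   // exact table: prefix -> entry id

    void insertPrefix(const std::string& prefix, int entryId) {
        std::size_t len = prefixesOf(prefix).size();
        if (byLength.size() < len) byLength.resize(len);
        byLength[len - 1].insert(prefix);
        fib[prefix] = entryId;
    }

    // Probe from the longest candidate length downward; only Bloom "hits"
    // fall through to the exact hash table, which removes false positives.
    int longestPrefixMatch(const std::string& name) const {
        std::vector<std::string> prefixes = prefixesOf(name);
        std::size_t maxLen = std::min(prefixes.size(), byLength.size());
        for (std::size_t len = maxLen; len > 0; --len) {
            const std::string& p = prefixes[len - 1];
            if (!byLength[len - 1].mayContain(p)) continue;
            auto it = fib.find(p);
            if (it != fib.end()) return it->second;
        }
        return -1;                               // no matching prefix
    }
};

int main() {
    NameTable table;
    table.insertPrefix("/video", 1);
    table.insertPrefix("/video/movies", 2);
    std::cout << table.longestPrefixMatch("/video/movies/matrix/seg0") << "\n";  // 2
    std::cout << table.longestPrefixMatch("/news/today") << "\n";                // -1
}
```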
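Similarly, the packet cache can be pictured as a map from a joint key of (packet type, data name with the segment number stripped) to the forwarding operations resolved for the first packet of a flow, so that later packets with the same key skip the multi-table lookup. The sketch below is an assumption-laden illustration (the key encoding, the "drop the last component" rule, and the ForwardingDecision fields are all hypothetical), not the exact design in the thesis.

```cpp
// Sketch: Packet Cache keyed on (packet type, name without segment number).
// Identifiers and the segment-stripping rule are illustrative assumptions.
#include <iostream>
#include <string>
#include <unordered_map>

enum class PacketType { Interest, Data };

// Drop the final name component, e.g. "/video/movie/seg=3" -> "/video/movie".
std::string stripSegment(const std::string& name) {
    auto pos = name.find_last_of('/');
    return (pos == std::string::npos || pos == 0) ? name : name.substr(0, pos);
}

std::string makeKey(PacketType type, const std::string& name) {
    return (type == PacketType::Interest ? "I|" : "D|") + stripSegment(name);
}

struct ForwardingDecision {
    int outFace;   // face to forward on (hypothetical field)
};

class PacketCache {
public:
    // Returns true and fills `out` on a hit, so the full table-lookup sequence
    // can be skipped for packets that repeat a recently seen name.
    bool lookup(PacketType type, const std::string& name, ForwardingDecision& out) const {
        auto it = cache_.find(makeKey(type, name));
        if (it == cache_.end()) return false;
        out = it->second;
        return true;
    }
    void insert(PacketType type, const std::string& name, ForwardingDecision d) {
        cache_[makeKey(type, name)] = d;   // a real cache would bound size and evict
    }
private:
    std::unordered_map<std::string, ForwardingDecision> cache_;
};

int main() {
    PacketCache pc;
    pc.insert(PacketType::Interest, "/video/movie/seg=0", {2});
    ForwardingDecision d{};
    // Subsequent segments of the same movie hit the cache and reuse face 2.
    if (pc.lookup(PacketType::Interest, "/video/movie/seg=7", d))
        std::cout << "cache hit, forward on face " << d.outFace << "\n";
}
```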
Keywords/Search Tags: Data Name Lookup, Hash, Prefix Bloom Filter, Packet Cache