
The Architecture Design Of Single Caching Node And Improvement On In-networking Caching Performance For Information-centric Networking

Posted on: 2019-04-14
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Ding
Full Text: PDF
GTID: 1318330542994140
Subject: Control Science and Engineering
Abstract/Summary:
With the rapid development of Internet technology, the dominant network applications have gradually shifted toward content access and information services. However, the traditional TCP/IP architecture has drawbacks in content distribution, such as poor scalability and lack of flexibility. In this context, Information-Centric Networking (ICN) has been proposed and has gained widespread attention. The success of ICN architectures depends on their ability to provide (i) large caches that (ii) process data traffic at line speed. However, the I/O speed of storage devices large enough to serve as such caches cannot match the wire-speed forwarding requirement of a network element, and introducing a block device therefore severely degrades the element's forwarding performance. How to design a single caching node that satisfies both requirements is an urgent problem. In recent years, Software Defined Networking (SDN) has been widely applied in areas such as network management and architecture design, owing to its separation of the control and forwarding planes. SDN also brings new opportunities for designing a single caching node in ICN. In this dissertation, we first design a single caching node based on SDN technology and propose a split architecture that supports terabyte-scale caching. We then improve the performance of in-network caching from the perspectives of cache insertion filtering and caching decision strategy. In summary, the main research work and innovations of this dissertation are as follows:

1. To cope with the speed mismatch between high-speed packet forwarding and low-speed block I/O when block devices serve as the content store of a network element, this dissertation proposes a novel split architecture based on Protocol Oblivious Forwarding (POF). The network device is split into two parts: a switch end and a storage end. The architecture guarantees the forwarding performance of the switch end by completely decoupling cache operations from forwarding operations. The switch end distributes requests and contents to different storage ends according to a load-balancing strategy; by extending the number of storage ports, the workload processed at each storage end can be kept below the SSD throughput limit. Multiple storage ends can therefore use large-capacity SSDs to meet the terabyte-scale capacity requirement. Taking full advantage of the switch end's programmability, the architecture lets the storage end process only a proprietary SSCP (Switch end and Storage end Communication Protocol), while the switch end performs the conversion between the network protocol and SSCP. To solve the packet dependency problem, we propose using a Linear-Match table to Keep States (LMKS) in the data plane. To speed up protocol conversion, we propose several efficient methods, including a proprietary operation that supports fast encapsulation of the SSCP header and the reduction of packet-data copy operations when constructing the SSCP payload. Experimental results show that LMKS significantly reduces forwarding latency and increases throughput when the POF switch runs applications that require keeping state. The latency of the packet-switching operation in the split architecture is one to two orders of magnitude lower than that of an unsplit switch. With a multi-threaded design, packet switching for simplified ICN packets or SSCP packets reaches a line speed of over 9 Mpps on a commodity server when the overhead of NIC operations is excluded.

2. Considering that typical Internet traffic contains a high fraction of contents requested only once over a long period, and that SSDs have a limited write lifetime, this dissertation proposes a lightweight cache insertion filtering scheme based on an LRU queue and a hash table. The bounded LRU queue controls the statistical period and performs replacement when the queue is full, while the hash table records each requested content and its request count. By preventing unpopular contents with a low request frequency from entering the cache, the scheme reduces the number of write operations on the SSD and improves the availability of the cache system. Because lookups on the hash table can become a performance bottleneck, the scheme dimensions the hash table so that each bucket is exactly one cache line, i.e., 64 bytes. This guarantees that a bucket can be retrieved with a single memory read, avoiding unnecessary accesses to slow memory in case of collisions. Experimental results on two traces representing two typical popularity distributions show that the proposed scheme requires only about 200 CPU cycles, less than 1% of the total cycles consumed for processing a pair of Interest and Data packets. On a single-layer cache system, the scheme improves the cache hit ratio by 10.27% and 48.6% on the two traces, respectively, compared with no filtering. On a hierarchical cache system, it reduces the number of SSD write operations without compromising the cache hit ratio, compared with a lightweight probationary insertion filtering scheme.

3. Since the default caching strategy of ICN, i.e., leave copy everywhere, causes a high degree of redundancy because every node along the delivery path caches the content, this dissertation proposes a content Popularity Ranking and node Importance Ranking Matched (PRIRM) caching strategy to reduce cache redundancy and improve content diversity. The strategy uses the content popularity ranking together with the node betweenness ranking to decide where along the delivery path a content should be cached. To estimate the popularity ranking, each caching node maintains a popularity table recording the request count of each content. To avoid the memory and lookup overhead of a very large popularity table, we propose a time-window-based method that updates the table over time and deletes the record of any content not requested during the window. Simulation results in ndnSIM show that the time-window-based popularity estimation achieves performance comparable to using global popularity in the PRIRM strategy. Compared with three benchmark caching strategies from the recent literature, i.e., the leave-copy-everywhere strategy, a centrality-based strategy, and a path-capacity-based strategy, PRIRM produces a hierarchical content distribution and thus reduces cache redundancy across nodes. A sensitivity analysis over different parameters, such as the number of contents, the popularity distribution, and the network topology, shows that PRIRM consistently achieves the lowest average hop count required for an Interest packet to obtain a cache hit. The above methods have been partially applied in work supported by the Special Fund for Strategic Pilot Technology of the Chinese Academy of Sciences (Grant No. XDA06010302).
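To illustrate how the switch end might distribute requests across storage ends, the following sketch hashes the content name to pick a storage port, so that all requests for the same content reach the same storage end and the load spreads across SSDs. The class and method names, and the hash-based policy itself, are illustrative assumptions, not the dissertation's exact load-balancing design.

```python
import hashlib

class SwitchEnd:
    """Minimal sketch of the switch end's dispatch step (hypothetical names)."""

    def __init__(self, num_storage_ports: int):
        self.num_storage_ports = num_storage_ports

    def select_storage_port(self, content_name: str) -> int:
        # Hashing the content name keeps every request for the same
        # content on the same storage end, while spreading distinct
        # contents evenly over the available storage ports.
        digest = hashlib.sha256(content_name.encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.num_storage_ports

switch = SwitchEnd(num_storage_ports=4)
port = switch.select_storage_port("/example/video/segment-17")
```

A real deployment would also track per-port load so that a hot storage end never exceeds its SSD throughput limit, as the abstract describes.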
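The cache insertion filter of contribution 2 can be sketched as a bounded LRU queue of request counters: a content is admitted to the SSD cache only after being requested more than once while its record survives in the queue. This is a minimal sketch assuming a threshold of two requests; the class name and the `OrderedDict`-based queue are illustrative, and the real scheme additionally fixes each hash bucket to one 64-byte cache line, which this high-level sketch does not model.

```python
from collections import OrderedDict

class InsertionFilter:
    """Admit a content into the SSD cache only after it is requested
    at least `threshold` times while its record is still inside the
    bounded LRU queue (the statistical period)."""

    def __init__(self, capacity: int, threshold: int = 2):
        self.capacity = capacity      # length of the LRU queue
        self.threshold = threshold    # requests needed before admission
        self.counts = OrderedDict()   # content name -> request count

    def should_cache(self, name: str) -> bool:
        if name in self.counts:
            self.counts[name] += 1
            self.counts.move_to_end(name)        # refresh LRU position
        else:
            if len(self.counts) >= self.capacity:
                self.counts.popitem(last=False)  # evict least recent record
            self.counts[name] = 1
        return self.counts[name] >= self.threshold

f = InsertionFilter(capacity=2)
first = f.should_cache("/a")    # one-timer: not admitted yet
second = f.should_cache("/a")   # second request within the period: admit
```

One-timers, which dominate typical Internet traffic, thus never trigger an SSD write, while contents requested twice within the statistical period are cached.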
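The time-window-based popularity table of contribution 3 can be sketched as a map from content name to a request count plus a last-seen timestamp; entries idle for longer than the window are deleted, bounding memory and lookup cost. Class and method names are illustrative assumptions; timestamps are passed explicitly here to keep the sketch deterministic.

```python
class PopularityTable:
    """Per-node popularity table: counts requests per content and drops
    entries that receive no requests during a sliding time window."""

    def __init__(self, window: float):
        self.window = window
        self.table = {}   # name -> [request_count, last_request_time]

    def record(self, name: str, now: float) -> None:
        entry = self.table.setdefault(name, [0, now])
        entry[0] += 1
        entry[1] = now

    def expire(self, now: float) -> None:
        # Delete any content not requested within the last `window` seconds.
        stale = [n for n, (_, t) in self.table.items() if now - t > self.window]
        for n in stale:
            del self.table[n]

    def ranking(self):
        # Content names, most-requested first; position gives popularity rank.
        return sorted(self.table, key=lambda n: self.table[n][0], reverse=True)

t = PopularityTable(window=10)
t.record("/a", now=0)
t.record("/a", now=1)
t.record("/b", now=2)
```

Calling `expire` periodically keeps the table small, which is how the abstract's method approximates global popularity without unbounded state.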
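Finally, the rank-matching idea behind PRIRM can be sketched as: the content ranked k-th in popularity is cached at the node with the k-th highest betweenness on the delivery path. This is one plausible reading of "popularity ranking and node importance ranking matched", not the dissertation's exact placement rule; function and variable names are hypothetical.

```python
def choose_cache_node(popularity_rank, path_nodes, betweenness):
    """Cache the content with 1-based popularity rank k at the node with
    the k-th highest betweenness along the delivery path (clamped to the
    path length, so very unpopular contents land on the least important
    node). Illustrative sketch of the PRIRM matching idea."""
    ordered = sorted(path_nodes, key=lambda n: betweenness[n], reverse=True)
    k = min(popularity_rank, len(ordered)) - 1
    return ordered[k]

betweenness = {"edge": 1.0, "core": 3.0, "aggregation": 2.0}
path = ["edge", "aggregation", "core"]
node = choose_cache_node(1, path, betweenness)   # most popular content
```

Because each popularity rank maps to a distinct node, copies of different contents are spread across the path instead of being duplicated everywhere, which is the hierarchical distribution the abstract reports.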
Keywords/Search Tags:Information-Centric Networking, In-networking Caching, Protocol Oblivious Forwarding, Split Architecture, Wire Speed Forwarding, Cache Insertion Filtering, Caching Decision Strategy