
Effectiveness Analysis Of Cache Replacement Strategies In Named Data Networks

Posted on: 2019-03-30
Degree: Master
Type: Thesis
Country: China
Candidate: Hunsoro Teshale Abebe
GTID: 2428330545452300
Subject: Computer Technology

Abstract/Summary:
With the emergence and development of content-oriented applications, the main task of the Internet has become large-scale, efficient data distribution, yet its host-to-host communication model limits how efficiently data can be distributed. Researchers working on future and next-generation networks have therefore proposed various Information Centric Network architectures, of which Named Data Networking (NDN) is one of the most representative. NDN uses a receiver-driven (requester-driven), pull-based communication model and realizes packet-level data distribution. For real-time streaming applications such as live video and webcasts, however, a user who needs the content must send Interest packets continuously, and these packets have to be issued correctly to meet real-time requirements. As the data packet generation rate increases, it becomes much harder for the user to control when these Interest packets are sent, which causes large operating overhead and wastes router resources.

The success of the NDN architecture requires broad community involvement and commitment, and NDN has already gained momentum with participation from academia and industry. Incremental deployment requires demonstrating that NDN can solve real-world problems for which TCP/IP-based solutions are either problematic or non-existent. The NDN team maintains an open-source implementation of the NDN protocol stack, a simulator, and a testbed to facilitate testing and broader community participation. NDN can run over anything that can forward datagrams (Ethernet, Wi-Fi, Bluetooth, cellular, IP, TCP, etc.), and anything can run over NDN, including IP. Instead of replacing the deployed IP infrastructure, NDN can simply run over it, and it can leverage the Internet's well-tested engineering solutions that have taken decades to evolve, such as the conventions, policies, and administrative practices for naming and routing.

In NDN, packets carry unique names and are routed based on name lookups. There are two types of packets: Interest packets and Data packets. A user who wants some content sends Interest packets to express the request, and Data packets are returned in reply carrying the requested content. NDN routers can cache some of the Data packets they forward. When an Interest packet arrives at a router that already holds the requested content, the router can return the content to the client immediately instead of forwarding the request all the way to the server, which can potentially save a large amount of bandwidth in the network.
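To make this lookup order concrete, the following is a minimal standalone C++ sketch of how a router might process an Interest: check the Content Store first, and only on a miss record the requester in the PIT and forward the Interest upstream. The Router, onInterest, and onData names are illustrative only and are not the actual NFD forwarding pipeline.

```cpp
// Simplified model of an NDN router's Interest/Data handling
// (not the real NFD pipeline): Content Store hit -> answer locally;
// miss -> record the requester in the PIT and forward upstream.
#include <iostream>
#include <set>
#include <string>
#include <unordered_map>

struct Router {
  std::unordered_map<std::string, std::string> contentStore; // name -> cached Data
  std::unordered_map<std::string, std::set<int>> pit;        // name -> requesting faces

  void onInterest(const std::string& name, int inFace) {
    auto hit = contentStore.find(name);
    if (hit != contentStore.end()) {
      // Cache hit: satisfy the Interest locally, saving upstream bandwidth.
      std::cout << "face " << inFace << " <- cached Data for " << name << "\n";
      return;
    }
    // Cache miss: remember who asked (PIT) and forward the Interest upstream.
    pit[name].insert(inFace);
    std::cout << "forwarding Interest for " << name << " upstream\n";
  }

  void onData(const std::string& name, const std::string& data) {
    contentStore[name] = data;          // cache the forwarded Data packet
    for (int face : pit[name])          // reply to all pending requesters
      std::cout << "face " << face << " <- Data for " << name << "\n";
    pit.erase(name);
  }
};

int main() {
  Router r;
  r.onInterest("/video/seg1", 1); // miss: forwarded upstream
  r.onData("/video/seg1", "...");
  r.onInterest("/video/seg1", 2); // hit: answered from the Content Store
}
```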
The NDN architecture should support globally unique, human-readable, secure, and location-independent names, so a major issue is to develop a naming mechanism that satisfies all of these requirements. Existing naming approaches, such as flat, hierarchical, and attribute-value names, each satisfy only some of them. Flat names are unique, add no overhead for longest-prefix matching, are self-certifying, and are easily handled by highly scalable structures such as DHTs; however, they do not support name aggregation, so their use increases routing table size and reduces network scalability, and there is still no specific research on whether flat names can provide the required performance. Hierarchical names are human-friendly and support name aggregation, which reduces routing table size and update time and makes the network scalable; however, because of name aggregation, hierarchical names do not fully support persistence. In NDN, the Content Name (CN) expresses content properties explicitly.

Content caching plays an essential role in NDN. Caching content in a node's Content Store (CS) is analogous to the buffer memory in IP routers, except that IP routers cannot reuse a packet after forwarding it. Named-data Link State Routing (NLSR) is a routing protocol for NDN: since NDN uses names to identify and retrieve data, NLSR propagates reachability to name prefixes instead of IP prefixes, and it disseminates routing updates in Interest/Data packets, directly benefiting from NDN's built-in data validation. Traditional networks also benefit from Information Centric Networking techniques, for example content delivery networks, which aim to distribute content efficiently and quickly.

Caching in NDN has several benefits. Caching content produced by other nodes decouples content from its producers, reduces the load on the producer side, and avoids a single point of failure by keeping multiple copies of the same content available in the network. It is also beneficial for dynamic content in the case of multicast or retransmission after packet loss. Content is evicted from the CS according to the underlying caching policy. Each router in an NDN network can cache individual packets, and every node's content store has an attached policy that determines how items in the store are replaced. Common policies include LRU (Least Recently Used), FIFO (First In First Out), and Random (RND). In current NDN research, LRU is the most commonly adopted caching policy, given the observation that about half of the packet-level caching benefit occurs within the first 10 seconds.

Cache replacement policy development in ndnSIM uses the CS implementation of NFD. To create a new cache replacement policy, a user needs to extend NFD's Policy class and implement callbacks that are invoked when a new Data packet is inserted into the CS, when an existing Data packet is deleted from the CS, and when a Data packet is about to be returned after a lookup match.
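The standalone C++ sketch below illustrates this callback structure; the names ReplacementPolicy, afterInsert, beforeErase, beforeUse, and chooseVictim are hypothetical simplifications that roughly mirror the insert, erase, and use events NFD's Policy class exposes, and the LRU subclass shows how a concrete policy reacts to them.

```cpp
// Simplified, standalone model of a CS replacement policy (hypothetical
// names, not the real NFD classes): the policy is notified of CS events
// and chooses which entry to evict when the store is full.
#include <iostream>
#include <list>
#include <string>
#include <unordered_map>

class ReplacementPolicy {
public:
  virtual ~ReplacementPolicy() = default;
  virtual void afterInsert(const std::string& name) = 0; // Data inserted into the CS
  virtual void beforeErase(const std::string& name) = 0; // Data removed from the CS
  virtual void beforeUse(const std::string& name) = 0;   // Data returned on a lookup hit
  virtual std::string chooseVictim() = 0;                // entry to evict when full
};

// LRU: keep names ordered by recency and evict the least recently used one.
class LruPolicy : public ReplacementPolicy {
  std::list<std::string> recency_; // front = most recent, back = least recent
  std::unordered_map<std::string, std::list<std::string>::iterator> pos_;

  void touch(const std::string& name) {
    auto it = pos_.find(name);
    if (it != pos_.end())
      recency_.erase(it->second);
    recency_.push_front(name);
    pos_[name] = recency_.begin();
  }

public:
  void afterInsert(const std::string& name) override { touch(name); }
  void beforeUse(const std::string& name) override { touch(name); }
  void beforeErase(const std::string& name) override {
    auto it = pos_.find(name);
    if (it != pos_.end()) { recency_.erase(it->second); pos_.erase(it); }
  }
  std::string chooseVictim() override { return recency_.back(); }
};

int main() {
  LruPolicy lru;
  lru.afterInsert("/a");
  lru.afterInsert("/b");
  lru.beforeUse("/a");
  std::cout << "evict: " << lru.chooseVictim() << "\n"; // prints "/b"
}
```

Keeping the policy's bookkeeping to constant-time list and hash-map operations reflects the complexity constraint discussed next: routers cannot afford heavyweight replacement logic.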
In NDN, cache replacement policies create room for newly popular content by removing less popular content from the cache. There is a trade-off between the processing capability of routers and the complexity of the replacement policy: because of the processing constraints at routers, these policies must remain simple. Evaluating the effectiveness of caching policies such as LRU, FIFO, and RND under different parameters aims at improving caching performance. Existing research has concentrated on designing cache allocation mechanisms without investigating the effectiveness of cache replacement mechanisms.

In highly dynamic networks, content prioritization plays an important role in application performance: high-priority content is more widely available in the network than low-priority content, and low-priority content suffers from higher access latency. The key question is how to decide the priority of content; the two main deciding factors can be content demand and the common information generated and exchanged among nodes. Prior work has proposed an in-network content prioritization policy for cache replacement and has pointed out the benefits of naming data for information-maximizing cached content delivery in ad hoc networks. Each cached item is labeled as either hot or cold, and whenever nodes encounter each other, hot content is exchanged before cold content. To find the hot content among the cached data, the authors formulate a knapsack problem in which the items placed in the knapsack must maximize utility across all user query replies; the utility-maximizing items are labeled hot.

This thesis assesses the effectiveness of three cache replacement mechanisms, LRU, FIFO, and RND, in terms of cache hit ratio (the fraction of Interest packets satisfied from a Content Store rather than forwarded upstream) in NDN via simulations. The simulations use ndnSIM, an open-source discrete-event network simulator built on NS-3. The ndnSIM-specific applications provide a convenient way to generate basic Interest/Data packet flows for various network-level evaluations, including the behavior of forwarding strategies and cache policies. These applications are realized on top of NS-3's Application abstraction and include several built-in tracing capabilities, such as the time to retrieve data. The packet flow in ndnSIM involves multiple elements, including NS-3's packet, device, and channel abstractions, the ndnSIM core, and processing by the integrated NFD forwarder with the help of the ndn-cxx library.

In this work, the simulated network consists of 12 routers and 80 nodes. Each router is randomly connected to other routers, and every node is connected to one router at random. All routers and nodes contain FIB, PIT, and CS components. The CS cache size is set to 64, 128, or 1024 Kbits; the requester's Interest sending rate is set to 16, 64, or 128 packets per second; and the experiment duration is 50, 100, or 150 seconds. The simulations are carried out under combinations of these system parameters: cache size, Interest packet sending rate, and experiment duration.
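A minimal ndnSIM scenario along these lines might look like the sketch below, assuming the ndnSIM 2.x API (helper and attribute names can differ across versions). It uses a three-node consumer-router-producer chain rather than the 12-router/80-node topology of the thesis, an LRU content store of 100 packets rather than a size in Kbits, and the lowest Interest rate of 16 packets per second.

```cpp
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/ndnSIM-module.h"

using namespace ns3;

int
main(int argc, char* argv[])
{
  // Illustrative three-node chain: consumer -- router -- producer.
  NodeContainer nodes;
  nodes.Create(3);

  PointToPointHelper p2p;
  p2p.Install(nodes.Get(0), nodes.Get(1));
  p2p.Install(nodes.Get(1), nodes.Get(2));

  // Install the NDN stack with an LRU content store of 100 packets;
  // other NFD policies (e.g. "nfd::cs::priority_fifo") are selected the same way.
  ndn::StackHelper ndnHelper;
  ndnHelper.setCsSize(100);
  ndnHelper.setPolicy("nfd::cs::lru");
  ndnHelper.InstallAll();

  // Name-based routing toward the producer's prefix.
  ndn::GlobalRoutingHelper routingHelper;
  routingHelper.InstallAll();

  // Consumer sends Interests under /prefix at a constant rate (Interests/s).
  ndn::AppHelper consumerHelper("ns3::ndn::ConsumerCbr");
  consumerHelper.SetPrefix("/prefix");
  consumerHelper.SetAttribute("Frequency", StringValue("16"));
  consumerHelper.Install(nodes.Get(0));

  // Producer answers Interests under /prefix with 1024-byte Data packets.
  ndn::AppHelper producerHelper("ns3::ndn::Producer");
  producerHelper.SetPrefix("/prefix");
  producerHelper.SetAttribute("PayloadSize", StringValue("1024"));
  producerHelper.Install(nodes.Get(2));

  routingHelper.AddOrigins("/prefix", nodes.Get(2));
  ndn::GlobalRoutingHelper::CalculateRoutes();

  // Record per-node cache hits/misses once per second; the cache hit ratio
  // is hits / (hits + misses). (Depending on the ndnSIM version, CsTracer
  // may require additional content-store configuration.)
  ndn::CsTracer::InstallAll("cs-trace.txt", Seconds(1.0));

  Simulator::Stop(Seconds(50.0)); // shortest experiment duration in the thesis
  Simulator::Run();
  Simulator::Destroy();
  return 0;
}
```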
The simulation results show that LRU and FIFO are suitable for NDN caching, and their cache hit ratios match when the Interest sending rate and the experiment duration are the same; RND achieves a better cache hit ratio than LRU and FIFO only in some special situations. Under the same experiment duration, the cache hit ratio decreases as the Interest sending rate increases and increases as the cache size increases, regardless of whether the replacement policy is LRU, FIFO, or RND. RND achieves a better cache hit ratio than LRU and FIFO when the Interest sending rate is 16 packets per second.

When the cache size is 1024 Kbits, the Interest sending rate is 128 packets per second, and the experiment duration is 100 seconds, the cache hit ratio is 7.53% for LRU, 7.95% for FIFO, and 3.94% for RND. The cache hit ratios of LRU, FIFO, and RND all decrease as the experiment duration increases: with a cache size of 1024 Kbits and a sending rate of 128 packets per second, as the duration increases from 50 to 150 seconds, the LRU hit ratio decreases by 7.63%, the FIFO hit ratio decreases from 13.76% to 5.72%, and the RND hit ratio decreases by 6.12%. The RND hit ratio decreases by 18.47% when the sending rate increases from 16 to 128 packets per second at a duration of 50 seconds, and it decreases from 16.69% to 7.2% as the duration increases from 50 to 150 seconds at a sending rate of 128 packets per second. These per-object cache hit ratios are what distinguish the caching policies and determine the overall effectiveness of NDN caching. As the Interest sending rate increases, the cache hit ratios of LRU, FIFO, and RND all decrease, but RND's hit ratio decreases more than those of LRU and FIFO. For example, the RND hit ratio drops from 82.4% to 8.1% when the sending rate increases from 16 to 128 packets per second at a duration of 50 seconds, whereas LRU and FIFO only drop from 82.4% to 13.3%. Moreover, LRU and FIFO achieve better cache hit ratios than RND when the cache size is 1024 Kbits and the sending rate is 64 to 128 packets per second, for durations of 50 to 150 seconds. In conclusion, under the same experiment duration the cache hit ratio decreases as the Interest sending rate increases, regardless of whether the replacement policy is LRU, FIFO, or RND.

Future work will compare the performance of these policies under more system parameters and more complicated scenarios. In addition, many more complex content replacement policies exist, and evaluating their performance is one future direction; developing more effective content replacement policies is another. Finally, since NDN has no separate transport layer, the main responsibilities of IP's transport layer have been shifted into the NDN forwarding plane, and designing effective and efficient forwarding strategies for different contexts and networks remains an open challenge.
Keywords/Search Tags:Named Data Network, Content Centric Network, Cache Replacement Policies, Cache Hit Ratio