
Research On The Optimization Mechanisms Of Cache Redundancy In Named Data Networking

Posted on: 2017-02-24
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H Yan
Full Text: PDF
GTID: 1108330491951538
Subject: Communication and Information System
Abstract/Summary:
With the innovation of applications based on Internet technology and the emergence of new content-oriented services, it has become increasingly apparent that the original design of the current network has many drawbacks and shortcomings. To explore the development direction of the Internet, researchers have studied and proposed many architectures for the future information network. Named Data Networking (NDN) uses 'content' instead of 'Internet Protocol (IP)' as the thin 'waist' of the network hourglass model. With its content-centric communication paradigm, NDN has become one of the most important research hotspots. However, various redundant contents are delivered and cached in the network due to the Flooding forwarding strategy and the Leave Copy Everywhere (LCE) caching strategy in NDN, which not only increases the transmission overhead and decreases the transmission efficiency, but also wastes the limited cache resources and reduces the cache hit ratio. Therefore, this thesis focuses on the cache redundancy problem and puts forward optimization mechanisms for cache redundancy from the perspectives of the forwarding strategy, the caching strategy, and the routing mechanism. The main contributions and innovations of this thesis are summarized as follows:

1) To address the transmission and caching of redundant contents caused by the Flooding strategy, a Counteract Redundant Data (CRD) forwarding strategy based on branch routers is proposed. It introduces a new packet, the Data ACKnowledgement (DACK), which stops the forwarding of an already received Data packet on other paths so as to reduce the transmission of redundant contents (see the first sketch below). A transmission model based on a tree structure is established for single and global branch-router triggering scenarios, and the forwarding processes of Interest, Data, and DACK packets are analyzed. In addition, formulas for the transmission overhead of the three forwarding strategies are given. The performance of CRD is also discussed in scenarios with multiple content requesters and packet loss. Simulation results indicate that CRD can effectively reduce the transmission of redundant contents, decrease unnecessary caching, and maintain stable performance under link failures. Furthermore, it meets scalability requirements.

2) To address the duplicate caching caused by the undifferentiated caching of the LCE strategy, a two-layer Hierarchical Cluster-based Caching strategy (HCC) is proposed. Routers in the Core Layer are not allowed to cache contents so that they can focus on data interaction, while routers in the Edge Layer elect clusterheads according to a clusterhead election algorithm and form Edge Clusters. Within each Edge Cluster, the importance of content routers, the classification of content popularity, and the corresponding cache-probability matrix are calculated. According to the information allocated by the clusterhead, each content router makes its own caching decision: a content router of higher (resp. lower) importance caches more (resp. less) popular contents with higher (resp. lower) probability (see the second sketch below). Simulation results indicate that HCC can significantly reduce the request time, the number of hops to a cache hit, the number of cached contents, and the amount of cache replacement, while improving the router hit ratio.

3) To address the problem that neighboring cache resources cannot be fully utilized by on-path caching and routing mechanisms, as well as the scalability problem caused by off-path caching and routing mechanisms, a K-Medoids-based intra-cluster Hash Routing mechanism (KMHR) is proposed. It combines on-path probabilistic caching and non-cooperative routing with off-path caching and implicitly cooperative routing, which reduces redundant contents while locating cached contents accurately. KMHR consists of two parts: a content-router selection process based on the K-medoids algorithm, and a Hash routing process based on content popularity (see the third sketch below). The former selects several content routers in an Edge Cluster as medoid routers. Depending on content popularity, the latter performs Hash routing toward a medoid router or shortest-path routing toward the content provider. KMHR guarantees the uniqueness of highly popular contents, so redundant copies are significantly reduced. Meanwhile, the scope of content-popularity updates is limited to an Edge Cluster, so that KMHR achieves a balance between caching efficiency and scalability. Simulation results indicate that KMHR achieves the shortest request time, the best routing gain, fewer cached contents, and the lowest cache replacement.
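First sketch (contribution 1): a minimal Python sketch of the DACK suppression idea in CRD, assuming a simplified branch-router model. The Face and BranchRouter classes and their methods are illustrative assumptions, not the thesis's implementation; the point shown is that the first Data packet returned for a flooded Interest is forwarded downstream, while a DACK cancels the other pending paths so late duplicates are neither forwarded nor cached.

class Face:
    """Illustrative stand-in for an NDN face; a real face would transmit packets."""
    def send_interest(self, name): pass
    def send_data(self, name, data): pass
    def send_dack(self, name): pass

class BranchRouter:
    def __init__(self):
        # PIT-like table: content name -> {"downstream": face, "upstream": set of faces}
        self.pending = {}

    def on_interest(self, name, downstream_face, upstream_faces):
        # Flooding: remember where the Interest came from and every face it was sent on.
        self.pending[name] = {"downstream": downstream_face,
                              "upstream": set(upstream_faces)}
        for face in upstream_faces:
            face.send_interest(name)

    def on_data(self, name, arriving_face, data):
        entry = self.pending.pop(name, None)
        if entry is None:
            return  # a copy was already forwarded; drop this redundant Data
        # Forward the first Data copy toward the requester and cache it once.
        entry["downstream"].send_data(name, data)
        self.cache(name, data)
        # Send DACK on the other pending paths to stop redundant Data in flight.
        for face in entry["upstream"] - {arriving_face}:
            face.send_dack(name)

    def on_dack(self, name):
        # Another path already delivered this Data: cancel the pending entry so the
        # late copy is neither forwarded further downstream nor cached here.
        self.pending.pop(name, None)

    def cache(self, name, data):
        pass  # the caching policy itself is outside the scope of this sketch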
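Second sketch (contribution 2): the probabilistic caching decision made by a content router in an Edge Cluster. The matrix values, the three importance ranks, and the three popularity classes below are illustrative assumptions; how the thesis computes router importance, popularity classes, and the cache-probability matrix inside the cluster is not shown.

import random

# Hypothetical cache-probability matrix allocated by the clusterhead:
# rows = router importance rank (0 = most important),
# columns = content popularity class (0 = most popular).
# More important routers cache more popular contents with higher probability.
CACHE_PROB = [
    [0.9, 0.6, 0.3],   # high-importance content router
    [0.6, 0.4, 0.2],   # medium-importance content router
    [0.3, 0.2, 0.1],   # low-importance content router
]

def should_cache(importance_rank, popularity_class):
    """Probabilistic caching decision of an Edge-Layer content router."""
    return random.random() < CACHE_PROB[importance_rank][popularity_class]

# Example: a high-importance router deciding on a very popular Data packet.
if should_cache(importance_rank=0, popularity_class=0):
    print("cache the Data packet")
else:
    print("forward without caching")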
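Third sketch (contribution 3): a simple K-medoids selection of medoid routers combined with popularity-based Hash routing. The hop_count table (assumed to contain zero self-distances), the popularity scores, and the popularity threshold are illustrative assumptions, and the K-medoids loop is a generic variant rather than the thesis's exact algorithm; hashing a popular content name onto exactly one medoid router is what keeps at most one copy of that content inside the Edge Cluster.

import hashlib
import random

def k_medoids(routers, hop_count, k, iterations=20):
    """Select k routers as medoids so that the total hop distance from every
    router in the Edge Cluster to its nearest medoid is small."""
    medoids = random.sample(routers, k)
    for _ in range(iterations):
        # Assign every router to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for r in routers:
            nearest = min(medoids, key=lambda m: hop_count[r][m])
            clusters[nearest].append(r)
        # Move each medoid to the cluster member minimizing the total distance.
        new_medoids = [min(members, key=lambda c: sum(hop_count[c][r] for r in members))
                       for members in clusters.values()]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids

def route_request(name, popularity, medoids, popularity_threshold=0.5):
    """Hash routing for popular contents, shortest-path routing otherwise."""
    if popularity >= popularity_threshold:
        # Hash the content name onto exactly one medoid router, so a popular
        # content is cached (and looked up) at a unique place in the cluster.
        index = int(hashlib.sha1(name.encode()).hexdigest(), 16) % len(medoids)
        return ("hash-route to medoid", medoids[index])
    # Less popular contents are requested from the provider over the shortest path.
    return ("shortest-path to provider", None)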
Keywords/Search Tags: Named Data Networking, Cache Redundancy, Forwarding Strategy, Hierarchical Cluster, Caching Strategy, Implicit Cooperation, Hash Routing