
The Buffer Cache Replacement Policies In The Networked Storage Servers

Posted on: 2011-01-06
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y J Zhao
Full Text: PDF
GTID: 1118330341451748
Subject: Computer Science and Technology

Abstract/Summary:
With the vigorous development of the Internet and the digital society, traditional direct-attached storage systems struggle to meet the challenge of explosive data growth, prompting continued investigation into new storage architectures. By integrating storage with networking, networked storage systems offer high performance, high capacity, ease of management, and scalability, and are increasingly used in high-performance computing fields such as finance, oil, electricity, and telecommunications.

As a major software remedy for I/O bottlenecks, caching is an important research topic for storage systems, and networked storage systems are no exception. A cache is a relatively fast memory buffer that holds data so that subsequent requests for it can be served quickly. Because a cache is usually much smaller than the storage at the next level of the I/O path, when the cache is full an appropriate data block must be discarded at an appropriate time to reclaim space for new blocks. The replacement policy governs this choice and is a key factor in cache performance. Most existing replacement policies were derived from research on direct-attached storage systems; they are unaware of the multi-client cache hierarchy and its access patterns, so they perform poorly in networked storage servers.

This thesis studies cache replacement policies that counter the negative impact of the multi-client cache hierarchy and its access patterns in networked storage servers. The work covers three settings: an isolated server, a single client with a single server, and multiple clients with a single server. Within these settings, four replacement policies with low miss penalty and high hit rate are proposed to improve cache performance across diverse workloads and applications. Specifically, the thesis makes the following contributions:

1. Since disk access exhibits sequential locality (sequential access is much faster than random access), we propose a Sequential Access detection Based cachE Replacement algorithm called SABER, which uses the LBAs of cached blocks to detect sequential blocks when making a replacement decision. SABER reduces the number of random disk accesses caused by cache misses, avoiding unnecessary seeks and rotational delays, so it achieves high performance by lowering the miss penalty in networked storage servers with weak locality.

2. Most cache replacement policies operate at the block level, so file-system metadata is entirely invisible to the cache. In fact, because files are stored and accessed sequentially in most storage systems, metadata can help identify the sequentiality of cached blocks. We present a FIle-system Metadata Aware cache Replacement algorithm called FIMAR, which attaches an inode to each cached block to estimate sequentiality. FIMAR gives blocks belonging to large files high priority for eviction, so that disks can operate in sequential mode as much as possible.

3. To eliminate data redundancy across the cache hierarchy of networked storage systems, traditional replacement policies often introduce a dedicated communication protocol and cannot be both client-friendly and efficient. We propose a REsident Distance based cache replacement algorithm for exclusive storage servers called RED, which tracks the contents of client caches by extracting semantic information from the standard I/O interface and makes replacement decisions based on how long a block stays in a client cache, attaining high performance without any changes to clients.

4. In networked storage systems, data correlation among multi-client applications seriously degrades server cache performance. Traditional replacement policies typically suit either low-correlation or high-correlation applications, but not both. We present a Multi-client Adaptive COoperative cache Replacement algorithm called MACOR, which enforces exclusive caching across different cache levels and cooperative caching within the same level, so it can adapt dynamically to changes in data correlation. MACOR also includes a dynamic cache-partitioning mechanism that resolves server cache conflicts induced by multiple clients.
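The core step behind SABER as described in contribution 1, detecting which cached blocks form sequential runs from their LBAs, can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the function name and the `min_run` threshold are assumptions.

```python
def find_sequential_blocks(lbas, min_run=4):
    """Group cached block LBAs into runs of consecutive addresses.

    Blocks inside a long run are cheap to re-read from disk (one seek,
    then streaming), so a SABER-like policy would prefer to evict them,
    keeping randomly placed blocks, whose misses cost a full seek.
    """
    runs, current = [], []
    for lba in sorted(lbas):
        if current and lba == current[-1] + 1:
            current.append(lba)          # extends the current run
        else:
            if current:
                runs.append(current)
            current = [lba]              # starts a new run
    if current:
        runs.append(current)
    # Return the LBAs that sit inside a sufficiently long sequential run.
    return {lba for run in runs if len(run) >= min_run for lba in run}
```

A policy built on this would pick its victim from the returned set first, falling back to a conventional recency order for the remaining (random) blocks.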
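FIMAR's eviction preference in contribution 2, discarding blocks of large files first because large files tend to be laid out and re-read sequentially, might be sketched like this. The `Block` record, `fimar_victim` name, and the use of file size in blocks as the priority key are illustrative assumptions, not the thesis's API.

```python
from collections import namedtuple

# Each cached block carries the inode of the file it belongs to,
# as contribution 2 describes.
Block = namedtuple("Block", ["lba", "inode"])

def fimar_victim(cached, file_size):
    """Pick an eviction victim in a FIMAR-like way.

    `cached`    -- iterable of Block records currently in the cache.
    `file_size` -- mapping from inode to file size in blocks (hypothetical
                   metadata lookup; a real system would read the inode).

    Blocks of the largest file are discarded first, so that misses on them
    can be served by sequential disk reads.
    """
    return max(cached, key=lambda b: file_size[b.inode])
```

Ties and recency would need a secondary criterion in practice; the sketch only shows the metadata-aware priority.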
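The resident-distance idea behind RED in contribution 3 can be sketched as a toy server cache that timestamps each block it ships to a client: a re-read after a short interval means the block left the client cache quickly, which refines the server's estimate of client residency, and blocks expected to still be client-resident are the most redundant server copies. Everything here (class name, the averaging rule, a single logical clock, one client) is an assumption for illustration; the thesis's RED algorithm has its own estimator.

```python
class RedSketch:
    """Toy exclusive server cache guided by estimated client residency."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.shipped = {}        # block -> logical time it was last sent to the client
        self.resident_est = 8    # running estimate of client residency, in ticks
        self.clock = 0

    def read(self, block):
        self.clock += 1
        if block in self.shipped:
            # The client asked again, so the block has left the client cache.
            # Fold the observed interval into the residency estimate.
            interval = self.clock - self.shipped[block]
            self.resident_est = (self.resident_est + interval) // 2
        self.shipped[block] = self.clock
        self._evict_if_full()

    def _evict_if_full(self):
        while len(self.shipped) > self.capacity:
            # Expected remaining client residency = estimate - age at server.
            # Evict the block with the largest remainder: it is the one most
            # likely still held by the client, so the server copy is redundant.
            victim = max(
                self.shipped,
                key=lambda b: self.resident_est - (self.clock - self.shipped[b]),
            )
            del self.shipped[victim]
```

Note how this needs no client modification: the only inputs are the block IDs and timing of ordinary read requests, matching the thesis's claim that RED works through the standard I/O interface.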
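The dynamic cache-partition mechanism mentioned in contribution 4 can be illustrated by a simple proportional heuristic: periodically split the server cache among clients according to their recent miss counts, so a client generating more misses gets more space. This is a hedged stand-in; MACOR's actual adaptation rule is defined in the thesis, and the function name and proportional-to-misses rule here are assumptions.

```python
def repartition(capacity, miss_counts):
    """Split `capacity` cache slots across clients by recent miss counts.

    `miss_counts` -- mapping from client id to misses in the last epoch.
    Returns a mapping from client id to its slot allocation for the next
    epoch; leftover slots from integer division go to the busiest client.
    """
    total = sum(miss_counts.values())
    if total == 0:
        total = 1  # avoid division by zero when no client missed
    shares = {c: capacity * m // total for c, m in miss_counts.items()}
    leftover = capacity - sum(shares.values())
    if miss_counts:
        busiest = max(miss_counts, key=miss_counts.get)
        shares[busiest] += leftover
    return shares
```

Recomputing the shares once per epoch, rather than per request, keeps the partitioning overhead off the critical I/O path.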
Keywords/Search Tags: networked storage, caching mechanism, replacement policy, exclusive caching, inclusive caching, miss penalty, average access time