
Caching On Flash Memory

Posted on: 2017-03-22
Degree: Doctor
Type: Dissertation
Country: China
Candidate: S Huang
Full Text: PDF
GTID: 1318330485450838
Subject: Computer system architecture

Abstract/Summary:
The performance gap between storage and memory is one of the major bottlenecks in computer systems. The rise of flash memory narrows that gap and brings new opportunities to solve the problem. Due to the higher cost and lower density of flash memory, fully solid-state systems that rely solely on flash for permanent data storage are rarely seen. Instead, hybrid storage systems consisting of flash and HDDs have been widely deployed. Caching is the most straightforward and popular scheme for building such systems. However, existing cache management policies are mainly designed for in-memory caches; they are agnostic to the characteristics of flash and can hardly make the most of it. This thesis proposes different solutions to address the performance and endurance issues that arise when using flash for caching. These solutions include:

(1) A Lazy Adaptive Replacement Cache (LARC) algorithm that reduces the SSD write operations incurred by cache replacement. LARC adopts the idea of selective caching and exploits the skewed popularity of data. By keeping seldom-accessed data out of the cache, LARC reduces the number of write operations issued to the SSD without significant loss of hit ratio. In many cases it achieves an even higher hit ratio than other algorithms, since popular blocks can be kept in the cache for a longer period of time. LARC is self-tuning and incurs little overhead. Extensive evaluation shows that LARC can significantly extend the lifetime of the SSD while delivering competitive performance.

(2) A PE-LRU algorithm that proactively evicts unpopular data from the cache to alleviate SSD write amplification. The key idea is to delete unpopular data from the SSD (with the TRIM command) as soon as possible. By carefully selecting the victims, this can reduce the number of pages moved during garbage collection without sacrificing hit ratio.
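The TRIM-based proactive eviction just described might be sketched as follows. This is an illustrative Python sketch only, not the thesis's implementation: the class name PELRU, the fixed cluster size, and the trim() callback (standing in for the real TRIM command) are all assumptions, and the read/write region split and popularity tracking of the actual algorithm are omitted.

```python
from collections import OrderedDict

CLUSTER = 64  # pages per cluster (illustrative value)

class PELRU:
    """Sketch of cluster-granular proactive eviction with TRIM.

    Cached pages are grouped into fixed-size clusters; the coldest
    whole cluster is evicted at once and TRIMmed, so SSD garbage
    collection does not have to relocate its pages.
    """
    def __init__(self, capacity_pages, trim):
        self.capacity = capacity_pages
        self.trim = trim                    # trim(start_page, n_pages)
        self.clusters = OrderedDict()       # cluster id -> resident pages

    def access(self, page):
        cid = page // CLUSTER
        self.clusters.setdefault(cid, set()).add(page)
        self.clusters.move_to_end(cid)      # cluster becomes most recent
        self._proactive_evict()

    def _resident(self):
        return sum(len(p) for p in self.clusters.values())

    def _proactive_evict(self):
        # Evict the coldest cluster(s) while over capacity and TRIM
        # the whole cluster, freeing its flash pages for GC.
        while self._resident() > self.capacity:
            cid, _pages = self.clusters.popitem(last=False)
            self.trim(cid * CLUSTER, CLUSTER)
```

Evicting at cluster rather than page granularity is what avoids the fragmentation mentioned below: the TRIMmed region is contiguous, so the flash translation layer can reclaim whole blocks cheaply.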
PE-LRU partitions the SSD into separate read and write regions and proactively evicts data from the two regions under different policies. It manages cached pages as clusters and evicts data on a per-cluster basis, which avoids the data fragmentation that eviction would otherwise cause. Experimental results show that PE-LRU significantly improves SSD performance at the cost of a slight decline in hit ratio, and thus achieves higher overall performance, especially for write-intensive workloads.

(3) A DMFC architecture that exploits the configurable density of dual-mode flash for higher performance. Dual-mode flash is a device capable of operating as either SLC or MLC, which offers the opportunity to trade capacity for performance. A Scalable Flash Storage (SFS) abstraction layer is proposed to manage dual-mode flash. DMFC leverages the differentiated write interface of SFS to distribute the read and write caches across MLC and SLC blocks. With its selective retention mechanism, DMFC persistently stores dirty data and frequently retrieved data on flash, while allowing other data to be evicted by SFS during garbage collection to reduce GC overhead. Experimental results show that DMFC slightly decreases the hit ratio but drastically improves the performance of the cache device, thus reducing the average response time of user I/O requests.
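As a concrete illustration of the selective-caching idea behind LARC in (1), a minimal Python sketch of lazy admission follows. This is not the thesis's exact algorithm: a block is admitted to the SSD cache only on its second recent access (a hit in a ghost queue of block IDs), and the adaptive sizing of the ghost queue within [0.1c, 0.9c] is simplified here to a plus-or-minus-one heuristic.

```python
from collections import OrderedDict

class LARC:
    """Sketch of Lazy Adaptive Replacement Cache admission.

    A block enters the SSD cache only on a hit in the ghost queue,
    i.e. on its second recent access, so one-time accesses never
    trigger an SSD write.
    """
    def __init__(self, cache_size):
        self.c = cache_size
        self.cache = OrderedDict()               # resident blocks, LRU order
        self.ghost = OrderedDict()               # ghost queue: block IDs only
        self.qsize = max(1, cache_size // 10)    # bounded in [0.1c, 0.9c]

    def access(self, block):
        """Return True on a cache hit; admit lazily on a miss."""
        if block in self.cache:
            self.cache.move_to_end(block)
            # Hit: shrink the ghost queue (be pickier about admission).
            self.qsize = max(self.c // 10, self.qsize - 1)
            return True
        # Miss: grow the ghost queue (admit more eagerly).
        self.qsize = min(9 * self.c // 10, self.qsize + 1)
        if block in self.ghost:
            # Second recent access: admit to the SSD cache (one write).
            del self.ghost[block]
            if len(self.cache) >= self.c:
                self.cache.popitem(last=False)   # evict the LRU block
            self.cache[block] = True
        else:
            # First access: remember the ID only; no SSD write occurs.
            self.ghost[block] = True
            while len(self.ghost) > self.qsize:
                self.ghost.popitem(last=False)
        return False
```

Because the ghost queue holds only block IDs in RAM, filtering out one-time accesses costs almost nothing, while every avoided admission saves an SSD write and the replacement it would have forced.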
Keywords/Search Tags: Flash Memory, Solid State Drive, Hybrid Storage, Cache, Endurance