
Theoretical Design And Algorithm Research For NAND Flash SSD With High-reliability And High-performance

Posted on: 2014-12-24  Degree: Master  Type: Thesis
Country: China  Candidate: B R Luo  Full Text: PDF
GTID: 2268330401959128  Subject: Signal and Information Processing
Abstract/Summary:
Many kinds of SSD (Solid State Disk) designs exist, employing different key technologies and algorithms, but existing designs share two shortcomings. Some rely too heavily on the capability of the underlying chips and give too little consideration to the design of the FTL (Flash Translation Layer) algorithms, so SSD read and write performance degrades significantly under different data storage workloads. Others lack a proper data-reliability mechanism, so the SSD cannot adapt well to enterprise-class data storage requirements.

This paper starts from a requirements analysis. We first complete the overall and system design of an SSD that must provide both high performance and high reliability. On that basis, we propose two algorithms, HUP-LRU and PPC RAID-5, and additionally select GC (Garbage Collection) and WL (Wear Leveling) algorithms to complete the core algorithm framework of this paper. We then present two kinds of test results: simulation results based on the widely used DiskSim simulator and traces, and prototype results obtained on a commercial benchmark test platform. Finally, we discuss directions for further SSD design. The main contributions of this paper are:

1) Considering the characteristics of flash and the general requirements of high performance and high reliability, we organize the SSD design into three general objectives derived from the core design requirements, and complete the overall design and system design accordingly.

2) To overcome the weakness of the traditional page-mapping FTL algorithm for frequently updated data, we propose an improved algorithm, HUP-LRU, for the write-request queuing mechanism in the write buffer. The algorithm uses statistics of historical buffer accesses to analyze access frequency and "recency," and chooses the block with minimum write amplification and lowest data access. This ensures minimum wear and maximum write effectiveness while preserving both endurance and IOPS performance.

3) Although the traditional RAID-5 (Redundant Arrays of Inexpensive Disks, level 5) mechanism provides high data reliability, it does not take the limited erase count of flash chips into account. We therefore propose a new RR (RAID Recovery) algorithm that keeps PPC (Partial Parity Cache) tables in DDR SDRAM (double data rate synchronous dynamic random access memory). This significantly reduces the number of erases while preserving RAID efficiency and improving endurance, so that SSDs can replace HDDs in corporate mass data storage.

4) We complete laboratory simulation tests for performance evaluation, as well as a prototype commercial verification test. Finally, we give suggestions for further improvement of SSD applications.
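As a rough illustration of the write-buffer idea in contribution 2), the sketch below shows a buffer policy that combines recency (LRU order) with an update-frequency counter so that hot, frequently updated pages stay in the buffer and cold pages are evicted first. The class name, the eviction window of the coldest quarter, and all method names are illustrative assumptions, not the thesis's actual HUP-LRU implementation.

```python
from collections import OrderedDict

class HotUpdatePageLRU:
    """Illustrative sketch (not the thesis's HUP-LRU): pages are kept in
    LRU order, and a per-page update counter lets eviction prefer cold
    pages (least recently used AND least frequently updated), so hot
    pages stay buffered and avoid repeated flash writes."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> update count, in LRU order

    def write(self, page_id):
        if page_id in self.pages:
            self.pages[page_id] += 1
            self.pages.move_to_end(page_id)  # mark as most recently used
            return None                      # buffer hit: no flash write
        victim = self._evict() if len(self.pages) >= self.capacity else None
        self.pages[page_id] = 1
        return victim                        # page flushed to flash, if any

    def _evict(self):
        # Among the least-recently-used quarter of the buffer, evict the
        # page with the fewest updates (the coldest), which keeps
        # frequently rewritten pages in DRAM and reduces flash wear.
        k = max(1, len(self.pages) // 4)
        window = list(self.pages.items())[:k]
        victim, _ = min(window, key=lambda kv: kv[1])
        del self.pages[victim]
        return victim

buffer = HotUpdatePageLRU(4)
evicted = [buffer.write(p) for p in [1, 2, 1, 3, 1, 4, 1, 5]]
```

In this trace, page 1 is updated four times and survives, while a cold page is evicted when page 5 arrives; plain LRU would make no such frequency distinction.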
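The parity-caching idea in contribution 3) can likewise be sketched: instead of rewriting the on-flash parity page after every data-page write, XOR deltas are accumulated in a DRAM-resident table and the combined parity is written to flash only once per full stripe. This is a minimal illustration of the general partial-parity technique under assumed names and interfaces; it is not the thesis's RR/PPC implementation.

```python
class PartialParityCache:
    """Illustrative sketch of a partial-parity cache for RAID-5 over
    flash: XOR deltas are accumulated in DRAM per stripe, and the
    parity page is written to flash only when the stripe is full,
    trading DRAM space for far fewer parity writes and erases."""

    def __init__(self, stripe_width):
        self.stripe_width = stripe_width  # data pages per stripe
        self.partial = {}                 # stripe_id -> (partial parity, pages seen)
        self.flash_parity_writes = 0      # one per stripe, instead of one per page
        self.last_flushed = None          # last parity page written to flash

    def write_page(self, stripe_id, data):
        parity, count = self.partial.get(stripe_id, (bytes(len(data)), 0))
        # Fold the new data into the cached partial parity (DRAM only).
        parity = bytes(a ^ b for a, b in zip(parity, data))
        count += 1
        if count == self.stripe_width:
            self._flush(stripe_id, parity)  # single flash parity write
        else:
            self.partial[stripe_id] = (parity, count)

    def _flush(self, stripe_id, parity):
        self.partial.pop(stripe_id, None)
        self.flash_parity_writes += 1
        self.last_flushed = parity

ppc = PartialParityCache(stripe_width=4)
for value in (1, 2, 4, 8):
    ppc.write_page(0, bytes([value] * 8))
```

After the four data-page writes above, the cache performs exactly one flash parity write (the XOR of all four pages), where a naive read-modify-write RAID-5 path would have rewritten the parity page four times.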
Keywords/Search Tags: High-reliability