With the rapid development of information technology and intelligent devices in daily life, massive amounts of data are generated all the time, and the requirements on storage systems keep rising. Owing to its low price, fast read/write speed, and large capacity, NAND flash memory has become the mainstream storage medium and is widely used in mobile phones, tablets, TVs, in-vehicle central control units, and other intelligent terminal platforms. NAND flash memory has several inherent characteristics: erase-before-write, out-of-place update, a limited number of erase cycles, and asymmetric I/O operations. Traditional hard-disk storage management methods are therefore not directly applicable and must be adapted to these characteristics. This thesis studies garbage collection and buffer management for NAND flash memory. Building on existing algorithms, a workload-adaptive garbage collection algorithm and a workload-adaptive buffer management algorithm are proposed, and their effectiveness is verified through comparative experiments and data analysis.

Based on an analysis of existing garbage collection algorithms, a workload-adaptive garbage collection algorithm, AFaGC (Another File-aware Garbage Collection), is proposed. (1) Two victim-block selection strategies are introduced, and the threshold for switching between them is adjusted dynamically according to the workload, improving both garbage collection efficiency and wear leveling; a sketch follows this paragraph. (2) A new method for computing the heat of a logical page is proposed: the heat is computed directly from the page's update count and the difference between sequence numbers, avoiding the problem of weighting historical heat against the current update frequency; a second sketch below illustrates it. (3) To reflect changing workload characteristics, a dynamic threshold based on physical-block erase counts is used. (4) The algorithm is validated on a Linux system with the QEMU platform. Experimental results on multiple datasets show that, compared with the GR, CB, CAT, FaGC, and FaGC+ algorithms, AFaGC reduces physical-block erase counts, the number of data-page copies, and the standard deviation of erase counts.
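A minimal sketch of the workload-adjusted victim selection in point (1) follows. The abstract does not name the two strategies or give the threshold formula, so the greedy rule, the wear-aware rule, and the switching condition below are illustrative assumptions only.

```python
# Hypothetical AFaGC-style victim selection: switch between a greedy rule
# and a wear-aware rule based on how imbalanced block wear has become.
from dataclasses import dataclass

@dataclass
class Block:
    erase_count: int
    valid_pages: int
    invalid_pages: int
    age: int  # logical time since the block's last invalidation

def wear_aware_score(b: Block) -> float:
    total = b.valid_pages + b.invalid_pages
    u = b.valid_pages / total if total else 0.0  # utilization
    # Favour old, mostly-invalid, lightly-worn blocks.
    return b.age * (1.0 - u) / (b.erase_count + 1)

def pick_victim(blocks, avg_erase: float, max_erase: int, k: float = 0.1):
    # While wear is balanced, greedily minimize valid-page copies; once the
    # imbalance exceeds the (assumed) threshold, use the wear-aware rule.
    if max_erase - avg_erase <= k * max(avg_erase, 1.0):
        return min(blocks, key=lambda b: b.valid_pages)
    return max(blocks, key=wear_aware_score)
```

The page-heat calculation in point (2) might look like the following. The abstract states only that heat is derived from a page's update count and a sequence-number difference; the ratio form here is an assumption.

```python
# Hypothetical page-heat bookkeeping for one logical page.
class PageHeat:
    def __init__(self, first_seq: int):
        self.update_count = 0
        self.first_seq = first_seq  # sequence number of the first write

    def record_update(self) -> None:
        self.update_count += 1

    def heat(self, global_seq: int) -> float:
        # Updates per unit of elapsed sequence numbers: frequent updates in
        # a short window yield high heat, without separately weighting
        # historical heat against the current update frequency.
        return self.update_count / (global_seq - self.first_seq + 1)
```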
To address the poor generality of existing NAND flash buffer management algorithms and the long-term occupation of the buffer by stale data, a new workload-adaptive buffer management algorithm, WD-LRU (Workload-Dependent Least Recently Used), is proposed. (1) WD-LRU records the age of a data page through its access sequence number, computes a comprehensive heat as the product of the access sequence number and the access count, and compares this heat with the global sequence number to classify pages as cold or hot. (2) Pages in the buffer are divided into four categories: cold clean, cold dirty, hot clean, and hot dirty. One queue manages the cold clean pages separately; a second, mixed queue holds the other three categories. (3) WD-LRU has two page replacement policies, a cold-clean-priority policy and a retention policy, and the threshold for switching between them is adjusted dynamically according to the workload. The cold-clean-priority policy evicts cold clean pages first; if the cold-clean queue is empty, the comprehensive heat of the mixed-queue pages is recomputed and pages are migrated to the appropriate queue. The retention policy evicts the mixed-queue page with the lowest comprehensive heat. This mixed strategy improves the buffer hit rate, prevents cold clean pages from being evicted prematurely and stale data from occupying the buffer for a long time, and adapts to different workloads; sketches of the heat calculation and the eviction logic follow this paragraph. (4) Experiments on a Linux system with the QEMU platform verify the effectiveness of the proposed algorithm. Results on multiple datasets show that, compared with the LRU, CFLRU, LRU-WSR, CCF-LRU, and LLRU algorithms, WD-LRU achieves a higher cache hit rate, fewer physical flash writes, and a shorter overall running time.
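The comprehensive heat and cold/hot classification in point (1) could be sketched as follows. The product form and the comparison with the global sequence number come from the abstract; the Page fields and the class labels are assumptions.

```python
# Hypothetical WD-LRU page metadata and classification.
from dataclasses import dataclass

@dataclass
class Page:
    last_access_seq: int
    access_count: int
    dirty: bool

def comprehensive_heat(p: Page) -> int:
    # Per the abstract: product of access sequence number and access count.
    return p.last_access_seq * p.access_count

def classify(p: Page, global_seq: int) -> str:
    # A page is considered hot if its heat exceeds the global sequence number.
    state = "hot" if comprehensive_heat(p) > global_seq else "cold"
    return f"{state}-{'dirty' if p.dirty else 'clean'}"
```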
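Reusing Page and comprehensive_heat from the sketch above, the two queues and two replacement policies of points (2) and (3) might be organized as below. The cold-clean queue, the heat recomputation when that queue runs empty, and the lowest-heat retention eviction follow the abstract; the policy-selection flag and the OrderedDict layout are assumptions, since the workload-adjusted switching threshold is not specified.

```python
# Hypothetical two-queue buffer with WD-LRU-style eviction policies.
from collections import OrderedDict

class WDLRUBuffer:
    def __init__(self):
        self.cold_clean = OrderedDict()  # cold clean pages, LRU order
        self.mixed = OrderedDict()       # cold dirty, hot clean, hot dirty

    def _reclassify(self, global_seq: int) -> None:
        # Recompute heat and migrate now-cold clean pages out of the
        # mixed queue (used when the cold-clean queue runs empty).
        for key in list(self.mixed):
            p = self.mixed[key]
            if not p.dirty and comprehensive_heat(p) <= global_seq:
                self.cold_clean[key] = self.mixed.pop(key)

    def evict(self, global_seq: int, cold_clean_priority: bool):
        # Which policy runs is decided by a workload-adjusted threshold
        # (not modelled here); cold_clean_priority stands in for it.
        if cold_clean_priority:
            if not self.cold_clean:
                self._reclassify(global_seq)
            if self.cold_clean:
                # Clean pages are cheapest to drop: no flash write-back.
                return self.cold_clean.popitem(last=False)
        # Retention policy (and fallback): evict the mixed-queue page
        # with the lowest comprehensive heat.
        key = min(self.mixed, key=lambda k: comprehensive_heat(self.mixed[k]))
        return key, self.mixed.pop(key)
```

Evicting clean pages first avoids a flash write, while the retention fallback keeps recently and frequently accessed pages resident, which is why the mixed strategy can raise the hit rate and cut physical writes at the same time.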