
Research On Block-Level Cache Prefetching Optimization Based On Deep Learning

Posted on: 2020-01-26    Degree: Master    Type: Thesis
Country: China    Candidate: X Shi    Full Text: PDF
GTID: 2428330590483180    Subject: Computer technology
Abstract/Summary:
Caching and prefetching are important techniques for improving the performance of a storage system. A good prefetching algorithm can greatly improve the cache hit rate and thereby the performance of the storage system. It is well known that the correlation between blocks can be exploited for cache prefetching optimization. In recent years, with the wide adoption of deep learning, some researchers have tried to mine block correlation with deep learning. However, existing work on deep-learning-based mining of block correlation only demonstrates that deep learning can effectively discover correlated blocks; it does not actually apply the correlation to cache prefetching. The main reason is that finding the most relevant blocks among a large number of blocks at prefetch time is quite difficult. Mining block correlation through deep learning and applying it to cache prefetching optimization is therefore meaningful work.

Building on research into block correlation, this thesis proposes IO2Vec, a technique for mining IO correlation for cache prefetching. On this basis, an LSTM-based Seq2Seq model is used for IO sequence prediction. Finally, sequential prefetching is combined with the IO sequence prediction to form SL, a deep-learning-based prefetching algorithm.

Experiments show that the SL prefetching algorithm significantly improves the cache hit ratio. Under different cache replacement policies (LRU, FIFO, LIRS, ARC) and different cache sizes, SL raises the hit rate by 10%-30% on average compared with a cache without prefetching, and by 3%-10% compared with a sequential prefetching algorithm. In addition, by analyzing the IO data, a model training method based on reusing missed IOs is explored. Experiments show that training on reused missed IOs takes only a fraction of the time needed to train the original model, while the prefetching effect is essentially unaffected.
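The abstract does not specify how IO2Vec is trained; the following is a minimal sketch under the assumption that it follows a word2vec-style skip-gram over block IDs in an IO trace, analogous to treating each block access as a word and each access window as a sentence. The trace data, session splitting, and hyperparameters here are all illustrative assumptions, not the thesis's actual setup.

```python
# Hypothetical IO2Vec-style embedding: skip-gram over block-access traces.
# Assumes gensim >= 4.0; all data and hyperparameters are illustrative.
from gensim.models import Word2Vec

# Each "sentence" is one window of consecutive block accesses, with each
# logical block address (LBA) treated as a token.
trace_sessions = [
    ["1024", "1025", "1026", "8192"],
    ["8192", "8200", "1024", "1025"],
]

model = Word2Vec(
    sentences=trace_sessions,
    vector_size=64,   # embedding dimension (assumed)
    window=5,         # context window of neighboring IOs (assumed)
    min_count=1,
    sg=1,             # skip-gram, as in word2vec
)

# Blocks frequently accessed together end up with similar vectors, so
# nearest neighbors in embedding space are candidate correlated blocks.
print(model.wv.most_similar("1024", topn=3))
```

The payoff of such an embedding is that "most relevant blocks" reduces to a nearest-neighbor lookup in vector space rather than a search over raw block IDs.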
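For the IO sequence prediction step, the thesis uses an LSTM-based Seq2Seq model; the sketch below is a simplified single-LSTM next-block predictor in PyTorch that only illustrates the idea of predicting upcoming block IDs and prefetching the most likely ones. The class name, vocabulary size, and layer dimensions are assumptions for illustration.

```python
# Simplified stand-in for the thesis's Seq2Seq predictor: an LSTM that
# scores the next block ID given a window of recent accesses.
import torch
import torch.nn as nn

NUM_BLOCKS = 10000   # size of the block-ID vocabulary (assumed)

class NextIOPredictor(nn.Module):
    def __init__(self, num_blocks=NUM_BLOCKS, embed_dim=64, hidden_dim=128):
        super().__init__()
        # The embedding layer plays the role of the learned block vectors.
        self.embed = nn.Embedding(num_blocks, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_blocks)

    def forward(self, block_ids):
        # block_ids: (batch, seq_len) tensor of recent block accesses.
        x = self.embed(block_ids)
        h, _ = self.lstm(x)
        # Logits over the next block ID, from the last time step.
        return self.out(h[:, -1, :])

model = NextIOPredictor()
recent = torch.randint(0, NUM_BLOCKS, (1, 16))  # a window of 16 recent IOs
logits = model(recent)

# Prefetch the top-k most likely next blocks into the cache; in SL this
# prediction would be combined with a sequential prefetcher.
prefetch_candidates = logits.topk(4).indices.squeeze(0).tolist()
print(prefetch_candidates)
```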
Keywords/Search Tags:Cache Prefetching, Deep Learning, Correlation, Sequence Prediction