
A Quantitative Analysis Of Memory Level Parallelism And Cache Prefetching On Multi-core Processors With Multi-level Caches

Posted on: 2021-01-31  Degree: Master  Type: Thesis
Country: China  Candidate: Y H Yan  Full Text: PDF
GTID: 2518306557490084  Subject: IC Engineering
Abstract/Summary:
The number of cache misses held by the Miss Status Handling Registers (MSHRs) is defined as Memory Level Parallelism (MLP), which is an indispensable factor in estimating cache performance. Unfortunately, due to the complexity of tracing all the dependence paths in the profiled instruction windows, previous works on MLP modeling are very time-consuming. Furthermore, previous works only model MLP in single-core processors with single-level caches. Meanwhile, MSHRs can also hold cache prefetch requests, which can significantly decrease cache miss rates. However, the impact of this mechanism is normally ignored in previous works on cache miss rate modeling.

This thesis focuses on the analytical modeling of MLP and of the influence of prefetching on cache miss rates, and carries out a case study of design space exploration on MSHRs. The main contributions of the thesis are as follows.

Firstly, in terms of MLP modeling: for single-core architectures, a probabilistic model is proposed to estimate the cache miss path length. The time overhead of the MLP modeling process is effectively decreased owing to this probabilistic model. For multi-core architectures, considering the necessary conditions of MLP, an analytical MLP model for multi-core processors with multi-level caches is proposed, and the modeling time overhead is significantly decreased compared to the previous ANN-based model. In addition, more insights into MLP are provided.

Secondly, considering both the decrease and the increase in cache misses caused by prefetching, the effect of cache prefetching is modeled. A more accurate analytical model of cache miss rates for processors equipped with prefetching is thereby established.

In this thesis, the SPEC CPU2006 benchmark suite is used to evaluate the error and time overhead of the models. Compared to gem5 cycle-accurate simulations, the average error of the MLP model in single-core processors with single-level caches is about 8%, which is slightly larger than that of previous works. However, the time overhead of the proposed model is decreased by 50% compared with previous works. Meanwhile, the average error of the MLP model in dual-core and quad-core processors with multi-level caches is about 10.3% and 11.5%, respectively. The average absolute error of the cache miss rate model in single-core processors with a single-level cache is 0.875%, which is 49.7% of K Ji's error (for LRU caches) and of StatCache's error (for random caches). The average absolute error of the cache miss rate model in dual-core and quad-core processors with multi-level caches is 6.65% and 8.89%, respectively, which is 58.1% and 61.2% of StatCC's error.
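To illustrate the MLP definition used above (the number of cache misses simultaneously held in the MSHRs), the following minimal Python sketch computes average MLP from a hypothetical miss trace. The trace format (per-miss issue and fill cycles), the function name, and the MSHR capacity parameter are assumptions for illustration only; they are not the thesis' analytical model, which estimates MLP probabilistically rather than from a measured trace.

# Minimal sketch under assumed inputs: each miss is a (issue_cycle, fill_cycle) pair.
# Average MLP is the mean number of misses outstanding in the MSHRs, averaged over
# the cycles in which at least one miss is in flight.

def average_mlp(misses, mshr_entries=16):
    """Estimate average MLP from a list of (issue_cycle, fill_cycle) tuples.

    `mshr_entries` caps the counted parallelism at the MSHR capacity; a real
    MSHR-full condition would stall further misses, so this cap is only a
    rough approximation for the purpose of the sketch.
    """
    if not misses:
        return 0.0
    events = []
    for issue, fill in misses:
        events.append((issue, 1))   # miss allocated into an MSHR entry
        events.append((fill, -1))   # miss completes, entry released
    events.sort()

    outstanding = 0     # misses currently held in MSHRs
    busy_cycles = 0     # cycles with at least one outstanding miss
    weighted_sum = 0    # sum over busy cycles of the outstanding-miss count
    prev_cycle = events[0][0]
    for cycle, delta in events:
        span = cycle - prev_cycle
        if outstanding > 0:
            busy_cycles += span
            weighted_sum += span * min(outstanding, mshr_entries)
        outstanding += delta
        prev_cycle = cycle
    return weighted_sum / busy_cycles if busy_cycles else 0.0


if __name__ == "__main__":
    # Hypothetical trace: three overlapping misses followed by one isolated miss.
    trace = [(0, 100), (10, 120), (20, 90), (300, 400)]
    print(f"average MLP = {average_mlp(trace):.2f}")

On the hypothetical trace above the sketch reports an average MLP of about 1.73, i.e. misses overlap in the MSHRs rather than being serviced one at a time, which is exactly the overlap that the thesis' probabilistic model tries to predict without tracing every dependence path.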
Keywords/Search Tags:Memory Level Parallelism, Cache prefetching, Cache miss rate, Analytical Model, Miss Status Handling Registers