
Classification-based Prefetch-Aware Cache Partition Mechanism

Posted on: 2022-02-28
Degree: Master
Type: Thesis
Country: China
Candidate: L L Chen
Full Text: PDF
GTID: 2518306560954959
Subject: Computer technology
Abstract/Summary:
Prefetching can effectively hide memory access latency by bringing the data the processor will need into the cache in advance, but it can also interfere with normal cache accesses. In addition, as processors have evolved from single-core to multi-core designs, multiple cores compete for the shared last-level cache, and this contention poses great challenges for prefetching. For diverse application working sets, how to use prefetching to hide memory access latency while reducing the inter-core interference caused by shared-cache contention has become a hot research issue.

An application can be classified as memory-intensive or non-memory-intensive according to how frequently it issues memory instructions at run time. However, existing prefetching and cache-partitioning mechanisms do not take the characteristics of non-memory-intensive applications into account and therefore cannot improve their performance. This thesis proposes two mechanisms based on the characteristics of non-memory-intensive applications, which improve the performance of non-memory-intensive applications while ensuring that the performance of memory-intensive programs is not affected. Because non-memory-intensive applications issue far fewer memory requests than other programs, a hardware prefetcher has difficulty discovering correlations among their requests, and a blindly configured, overly aggressive prefetcher causes cache pollution and hurts the performance of other programs. To improve the performance of non-memory-intensive programs, this thesis therefore proposes the following two mechanisms.

(1) Targeting the small number of memory instructions and weak locality of non-memory-intensive applications, a classification-based dynamic prefetch adjustment mechanism is proposed. It adjusts the aggressiveness of prefetching dynamically by monitoring application characteristics, such as prefetch accuracy, at run time. For most programs, the prefetch aggressiveness of the current program is increased whenever the prefetch accuracy exceeds a preset threshold. For non-memory-intensive programs, however, the adjustment is controlled more tightly, so that an overly aggressive prefetch configuration cannot evict other potentially useful blocks from the cache. The mechanism also specifies when to re-enable prefetching: if prefetching has been disabled but performance continues to decline, the slowdown is most likely caused by excessive memory access latency, so prefetching is re-enabled promptly to exploit its benefits, reduce the application's latency, and avoid further performance loss.
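To make the adjustment policy concrete, the following is a minimal C++ sketch of the classification-based dynamic prefetch adjustment. The class name ClassifiedPrefetchController, the sampled counters, and all thresholds (the memory-instruction ratio used for classification, the two accuracy thresholds, and the aggressiveness caps) are illustrative assumptions, not values taken from the thesis.

```cpp
// Minimal sketch of the classification-based dynamic prefetch adjustment.
// All thresholds, field names, and the sampling interval are assumptions
// made for illustration.
#include <algorithm>
#include <cstdint>

struct CoreSample {
    uint64_t memInstructions;   // memory instructions retired in the interval
    uint64_t totalInstructions; // all instructions retired in the interval
    uint64_t usefulPrefetches;  // prefetched blocks later referenced by demand accesses
    uint64_t issuedPrefetches;  // all prefetches issued in the interval
    double   ipc;               // instructions per cycle in the interval
};

class ClassifiedPrefetchController {
public:
    void onIntervalEnd(const CoreSample& s) {
        double memPerKilo = s.totalInstructions
            ? 1000.0 * s.memInstructions / s.totalInstructions : 0.0;
        bool memIntensive = memPerKilo > kMemInstPerKiloThreshold;
        double accuracy = s.issuedPrefetches
            ? static_cast<double>(s.usefulPrefetches) / s.issuedPrefetches : 0.0;

        if (!enabled_) {
            // Prefetching is off but performance keeps dropping: the slowdown
            // is likely due to memory latency, so turn prefetching back on.
            if (s.ipc < lastIpc_) enabled_ = true;
        } else if (memIntensive) {
            // Ordinary case: raise aggressiveness whenever accuracy clears
            // the preset threshold, otherwise back off.
            if (accuracy > kAccuracyThreshold) level_ = std::min(level_ + 1, kMaxLevel);
            else                               level_ = std::max(level_ - 1, 1);
        } else {
            // Non-memory-intensive core: demand a stricter accuracy threshold
            // and cap the degree, so prefetches cannot crowd useful blocks out.
            if (accuracy > kStrictAccuracyThreshold)
                level_ = std::min(level_ + 1, kNonIntensiveCap);
            else
                level_ = 1;
        }
        lastIpc_ = s.ipc;
    }

    int prefetchDegree() const { return enabled_ ? level_ : 0; }

private:
    static constexpr double kMemInstPerKiloThreshold  = 100.0; // assumed
    static constexpr double kAccuracyThreshold        = 0.5;   // assumed
    static constexpr double kStrictAccuracyThreshold  = 0.75;  // assumed
    static constexpr int    kMaxLevel = 4, kNonIntensiveCap = 2;

    bool   enabled_ = true;
    int    level_   = 1;
    double lastIpc_ = 0.0;
};
```

In this sketch the non-memory-intensive path both demands a stricter accuracy threshold and caps the degree, which corresponds to the tighter control of prefetch aggressiveness described above.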
(2) Considering that non-memory-intensive applications are easily disturbed by other programs, a classification-based prefetch-aware cache partitioning mechanism is proposed. It selects the most appropriate cache partitioning scheme dynamically by monitoring, at run time, the types of applications running on the multi-core processor. When non-memory-intensive programs are present, a region of the shared cache is partitioned off for them to avoid interference from other programs. The mechanism also accounts for the characteristics of prefetch-friendly applications: since such programs gain large performance improvements from prefetching, the mechanism likewise sets aside a partition in the shared cache to shield them from interference when prefetch-friendly applications are running. When neither type of program is present in the multi-core system, these partitions are merged back into the shared cache to avoid the performance degradation caused by a reduced effective cache capacity. This mechanism not only compensates for the performance loss that prefetching mechanisms cause by neglecting non-memory-intensive applications, but also preserves the performance advantage of prefetch-friendly applications.
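The partition selection can be pictured with the short C++ sketch below. The AppClass labels, the two-way reservation sizes, and the choosePartition interface are assumptions made for illustration; an actual implementation would apply the chosen layout through way masking or a similar control in the last-level cache controller.

```cpp
// Minimal sketch of the classification-based, prefetch-aware partition
// selection. Way counts and class labels are illustrative assumptions.
#include <vector>

enum class AppClass { MemIntensive, NonMemIntensive, PrefetchFriendly };

struct Partition {
    int sharedWays;            // ways left in the common shared pool
    int nonIntensiveWays;      // ways reserved for non-memory-intensive cores
    int prefetchFriendlyWays;  // ways reserved for prefetch-friendly cores
};

// Decide the LLC layout for the next interval from the per-core classes.
Partition choosePartition(const std::vector<AppClass>& cores, int totalWays) {
    bool hasNonIntensive = false, hasPrefetchFriendly = false;
    for (AppClass c : cores) {
        if (c == AppClass::NonMemIntensive)  hasNonIntensive = true;
        if (c == AppClass::PrefetchFriendly) hasPrefetchFriendly = true;
    }

    Partition p{totalWays, 0, 0};
    // Carve out a small region so non-memory-intensive programs are not
    // disturbed by the traffic of other cores (2 ways is an assumed size).
    if (hasNonIntensive)     { p.nonIntensiveWays = 2;     p.sharedWays -= 2; }
    // Likewise isolate prefetch-friendly programs so their prefetched
    // blocks are not evicted before they are used.
    if (hasPrefetchFriendly) { p.prefetchFriendlyWays = 2; p.sharedWays -= 2; }
    // If neither class is present, everything stays in the shared pool,
    // avoiding the capacity loss of unnecessary partitions.
    return p;
}
```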
Keywords/Search Tags:Cache, Multi-core processor, Shared cache, Hardware prefetching, Cache partitioning