
A high-bandwidth memory pipeline for wide issue processors

Posted on: 2003-11-30
Degree: Ph.D
Type: Dissertation
University: University of Minnesota
Candidate: Cho, Sangyeun
GTID: 1468390011479220
Subject: Computer Science
Abstract/Summary:
Providing adequate data bandwidth is extremely important for a future wide-issue processor to achieve its full performance potential. Adding a large number of ports to a data cache, however, becomes increasingly inefficient and significantly increases hardware complexity. As a solution to this problem, we first review and analyze a memory pipeline coupled with multiple separate cache banks. We then propose data decoupling, which classifies memory instructions early in the pipeline and steers them into different cache banks. Moreover, we study an interesting yet little-explored behavior of memory access instructions, called access region locality, which relates each static memory instruction to its range of access locations at run time. Our experimental study using a set of SPEC95 benchmark programs shows that most memory access instructions reference a single region at run time. It also shows that the access region of a memory instruction can be predicted accurately at run time by scrutinizing the instruction's addressing mode and its past access history. We describe and evaluate a wide-issue superscalar processor with two distinct sets of memory pipelines and caches, driven by an access region predictor. Experimental results indicate that the proposed mechanism is very effective in providing high memory bandwidth to the processor, yielding performance comparable to or better than a conventional memory design with a heavily multi-ported data cache, which incurs much higher hardware complexity.
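The idea of predicting an access region per static memory instruction can be illustrated with a minimal sketch. This is not the dissertation's exact predictor design: the last-outcome prediction scheme, the PC-indexed table, the two coarse regions ("stack" vs. "heap/global"), the address boundary, and the default prediction are all illustrative assumptions.

```python
# Minimal sketch of a per-instruction access region predictor.
# Assumption: a simple last-outcome scheme, keyed by the static
# instruction's PC; the region boundary below is hypothetical.

STACK_BASE = 0x7FF0_0000  # assumed boundary: addresses at or above are "stack"

def actual_region(addr):
    """Classify a resolved address into a coarse access region."""
    return "stack" if addr >= STACK_BASE else "heap/global"

class AccessRegionPredictor:
    def __init__(self):
        self.table = {}  # PC -> region observed on the last execution

    def predict(self, pc):
        # Unseen instructions default to heap/global (an assumption,
        # standing in for whatever default a real design would choose).
        return self.table.get(pc, "heap/global")

    def update(self, pc, addr):
        # Record the region actually referenced once the address resolves.
        self.table[pc] = actual_region(addr)

# Usage: each memory instruction is steered to the cache bank of its
# predicted region; the prediction is checked when the address resolves.
pred = AccessRegionPredictor()
trace = [(0x400, 0x7FF0_1000), (0x404, 0x1000_0000), (0x400, 0x7FF0_2000)]
correct = 0
for pc, addr in trace:
    if pred.predict(pc) == actual_region(addr):
        correct += 1
    pred.update(pc, addr)
# PC 0x400 misses on its first execution, then predicts correctly,
# reflecting the access region locality the abstract reports.
```

A real design would also consult the addressing mode (e.g., stack-pointer-relative accesses strongly suggest the stack region), but the table above captures the "past access history" half of the prediction.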
Keywords/Search Tags: Memory, Processor, Data, Pipeline, Cache