
The Research On Memory Level Parallelism And Data Cache Design And Verification Based On EPIC Architecture

Posted on: 2005-12-02    Degree: Master    Type: Thesis
Country: China    Candidate: R He    Full Text: PDF
GTID: 2168360155471835    Subject: Computer Science and Technology
Abstract/Summary:
As microprocessor parallelism techniques and fabrication processes have improved, processors can execute more instructions in less time. Supplying a sufficient, continuous data stream to several parallel instruction streams, and hiding memory latency, have therefore become key to increasing microprocessor performance. The Explicitly Parallel Instruction Computing (EPIC) architecture combines the advantages of superscalar and VLIW techniques: through communication between the compiler and the hardware, it improves processor performance. Based on the EPIC architecture, this paper presents the design of a data cache component in the back-end pipeline that increases the efficiency of data fetching and latency hiding by using run-time ambiguous memory address checking and multi-port pipelined memory parallelism.

We first study EPIC and the IA-64 instruction set. On that basis, we examine the challenges this architecture poses to the memory subsystem, present a solution to those challenges, and design a memory hierarchy that supports the Optimized Lockstep Model. Secondly, we analyze run-time ambiguous memory address checking; with compiler support, this technique minimizes the reference latency of load instructions. Then, building on an analysis of the back-end pipeline and of the memory hierarchy's support for memory-level parallelism, we present the design of a multi-port, pipelined data cache based on the EPIC architecture. For memory dependence and memory ordering, which arise frequently in a multi-port memory, we analyze each problem carefully, present a distinct solution for each, and integrate these techniques into the logic design of the L1 data cache.
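The run-time ambiguous memory address checking mentioned above follows the IA-64 data-speculation pattern: the compiler hoists a load above a possibly-aliasing store, and the hardware checks at run time whether the speculation held. A minimal sketch of that mechanism, modeled loosely on IA-64's ALAT (all class and method names here are illustrative, not taken from the thesis):

```python
# Minimal model of run-time ambiguous-address checking (data speculation).
# An "advanced load" is hoisted above a possibly-aliasing store; a table
# (like IA-64's ALAT) records its address, and a later check detects
# whether an intervening store invalidated the speculated value.

class SpeculationTable:
    def __init__(self):
        self.entries = {}  # destination register -> speculated load address

    def advanced_load(self, reg, addr, memory):
        """ld.a-style: load early and remember the address."""
        self.entries[reg] = addr
        return memory[addr]

    def store(self, addr, value, memory):
        """Every store snoops the table; a matching address kills the entry."""
        memory[addr] = value
        stale = [r for r, a in self.entries.items() if a == addr]
        for r in stale:
            del self.entries[r]

    def check(self, reg, addr, memory, speculated_value):
        """chk.a-style: if the entry survived, speculation was safe;
        otherwise take the recovery path and re-execute the load."""
        if reg in self.entries:
            return speculated_value
        return memory[addr]  # recovery: reload the true value

mem = {0x100: 1, 0x200: 2}
alat = SpeculationTable()
v = alat.advanced_load("r8", 0x100, mem)  # speculative load: v == 1
alat.store(0x100, 42, mem)                # aliasing store invalidates r8
v = alat.check("r8", 0x100, mem, v)       # recovery reloads: v == 42
```

When the store does not alias the speculated address, the check succeeds and the load's latency has been fully hidden; only on an actual conflict is the recovery reload paid.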
Finally, simulation results and timing-delay analysis are given, showing that the logic we designed functions correctly and that its delay falls within the desired bounds. The key technologies in this paper, including run-time ambiguous memory address checking, a new multi-port memory architecture that maximally exploits instruction-level parallelism, and the solutions to memory dependence and memory ordering built on that architecture, have all been implemented in the X processor, the first 64-bit high-performance processor based on the EPIC architecture.
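The memory dependence and memory ordering problems described above arise when several ports access the cache in the same cycle. One common way to resolve them, sketched here purely as an illustration and not as the thesis's actual logic, is to order same-cycle accesses by program order, forward older stores to younger loads at the same address, and commit stores afterward:

```python
# Illustrative sketch: resolving memory dependences among accesses that a
# multi-port cache receives in one cycle. Accesses are kept in program
# order; an older store forwards its value to a younger same-address load,
# and same-address stores are serialized (the youngest wins at commit).

def resolve_ports(accesses, memory):
    """accesses: list of (op, addr, value) tuples in program order,
    with op in {'load', 'store'}; returns the load results in order."""
    pending = {}   # addr -> youngest store value seen this cycle
    results = []
    for op, addr, value in accesses:
        if op == 'store':
            pending[addr] = value  # a later store overwrites an earlier one
        else:
            # a load sees the youngest older store, else the cache contents
            results.append(pending.get(addr, memory.get(addr, 0)))
    memory.update(pending)  # commit all surviving stores after the cycle
    return results

mem = {0x10: 5}
out = resolve_ports([('load', 0x10, None),
                     ('store', 0x10, 9),
                     ('load', 0x10, None)], mem)
# out == [5, 9]: the first load precedes the store, the second load
# receives the forwarded value; mem[0x10] == 9 after commit
```

The point of the sketch is the ordering discipline itself: each load observes exactly the stores that precede it in program order, which is the invariant the L1 data cache logic must preserve across its ports.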
Keywords/Search Tags: Back-End pipeline, L1 data Cache, EPIC, data speculation, memory dependence, memory ordering