
Research And Design Based On RISC Cache

Posted on: 2018-01-18  |  Degree: Master  |  Type: Thesis
Country: China  |  Candidate: K M Ma  |  Full Text: PDF
GTID: 2348330542952431  |  Subject: Engineering
Abstract/Summary:
With the development of semiconductor technology, the speed gap between the CPU and memory has become more and more pronounced. To alleviate this gap, a Cache is placed between the CPU and memory, creating a memory hierarchy built from memories of different speeds and sizes. The Cache is an indispensable component of the processor chip and occupies a large proportion of its area; the closer a level of the hierarchy is to the processor, the smaller its capacity, the faster its access, and the higher its cost per byte. Cache performance is therefore crucial to the CPU, and Cache capacity and speed have become key indicators of microprocessor performance. This thesis studies the key technologies of the Cache system and designs and verifies the Cache system of a RISC-architecture CPU. The work is as follows.

Firstly, based on a thorough investigation of the Cache structure and the related control strategies used in the company, the thesis analyzes the influence of the mapping mode, the degree of associativity, the block size, and the Cache capacity on CPU performance.

Secondly, a design scheme was worked out, taking the practical structure and control strategy into account: 1. Compared with direct-mapped and fully associative organizations, a 4-way set-associative Cache has clear advantages. 2. A capacity of 32 KB was chosen to ensure a high hit rate without unduly affecting access speed. 3. Two configurable write strategies were provided, making the Cache system more flexible. 4. Compared with the LRU algorithm, the selected PLRU algorithm requires fewer logic resources. After these design targets were fixed, the Tag unit, Data unit, State unit, state machine, and other modules were designed.

Thirdly, because virtual memory is used together with the Cache SRAM structure, a mechanism for translating virtual addresses into physical addresses is required. A TLB module was designed that performs VIPT (virtually indexed, physically tagged) lookups in parallel with the Cache access, so that virtual-address translation completes within one cycle. In addition, continuous data write-back operations would otherwise block the pipeline, so a Store Buffer was designed as a FIFO between the Dcache and memory, allowing a write-back to be handed off in only two cycles and greatly improving CPU performance.

Fourthly, after the complete memory-access path of the instruction pipeline was implemented, simulation and verification were carried out with ModelSim and other professional ASIC design tools. The results show that the CPU can access memory through the Cache system; instruction and data operations on both hits and misses, the related maintenance operations, and the special instructions all complete correctly. At the same time, the TLB completes address translation quickly, and the Store Buffer completes Dcache write-back operations without blocking the pipeline.

The Cache system and the memory-access functions implemented in this thesis work correctly with the CPU and show good performance, providing a technical reference for research on RISC-architecture CPUs.
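As a rough illustration of the second design point, the following C sketch models tree-based pseudo-LRU (PLRU) victim selection for one 4-way set. It is a behavioral sketch only, not the thesis's RTL; the bit layout and function names are assumptions introduced here. A tree-PLRU set needs just 3 state bits per 4-way set, whereas tracking a true LRU ordering needs more state plus comparison logic, which is why PLRU costs fewer logic resources.

```c
#include <stdint.h>

/* Tree-PLRU state for one 4-way set: 3 bits (b2 b1 b0), illustrative layout.
 * b0 chooses between way pair {0,1} and {2,3}; b1 picks inside {0,1};
 * b2 picks inside {2,3}. Each bit points toward the "colder" side.       */
typedef struct { uint8_t bits; } plru4_t;

/* Walk the tree along the colder side to pick the victim way. */
static int plru4_victim(const plru4_t *s)
{
    if (((s->bits >> 0) & 1) == 0)                /* colder pair is {0,1}  */
        return ((s->bits >> 1) & 1) ? 1 : 0;
    else                                          /* colder pair is {2,3}  */
        return ((s->bits >> 2) & 1) ? 3 : 2;
}

/* On a hit or a fill, flip the bits on the accessed way's path so they
 * point away from it; only one tree path is updated.                    */
static void plru4_touch(plru4_t *s, int way)
{
    if (way < 2) {
        s->bits |=  0x1;                 /* b0 = 1: next victim from {2,3} */
        if (way == 0) s->bits |=  0x2;   /* b1 = 1: way 1 is now colder    */
        else          s->bits &= ~0x2;   /* b1 = 0: way 0 is now colder    */
    } else {
        s->bits &= ~0x1;                 /* b0 = 0: next victim from {0,1} */
        if (way == 2) s->bits |=  0x4;   /* b2 = 1: way 3 is now colder    */
        else          s->bits &= ~0x4;   /* b2 = 0: way 2 is now colder    */
    }
}
```

Because an access only flips the bits along one tree path, the update logic stays small compared with maintaining a full recency order, at the cost of only approximating true LRU behavior.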
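The abstract's third point relies on the VIPT property that the set index is taken from address bits that are identical in the virtual and physical address, so set indexing and TLB translation can proceed in parallel within one cycle. The C sketch below illustrates that flow with illustrative parameters (64 sets x 64-byte lines x 4 ways = 16 KB, 4 KB pages), deliberately chosen so the index fits entirely inside the page offset; they are not the thesis's exact 32 KB geometry, and tlb_translate is a hypothetical hook, stubbed here as an identity mapping.

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_BITS   6                 /* 64-byte line (illustrative)      */
#define INDEX_BITS  6                 /* 64 sets (illustrative)           */
#define PAGE_BITS   12                /* 4 KB page (illustrative)         */
#define NUM_WAYS    4

typedef struct {
    bool     valid[NUM_WAYS];
    uint32_t ptag[NUM_WAYS];          /* physical tags stored per way     */
} cache_set_t;

/* Hypothetical TLB hook: identity-map the virtual page number for
 * illustration; a real design would also report TLB misses.             */
static uint32_t tlb_translate(uint32_t vpn) { return vpn; }

/* VIPT-style lookup: the set is indexed with bits below the page offset
 * while the TLB translates the VPN, and the physical tags are compared
 * once both results are available. Returns the hit way, or -1 on miss.  */
static int vipt_lookup(cache_set_t *sets, uint32_t vaddr)
{
    uint32_t index = (vaddr >> LINE_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t vpn   = vaddr >> PAGE_BITS;
    uint32_t ptag  = tlb_translate(vpn);      /* in hardware: in parallel */

    cache_set_t *set = &sets[index];
    for (int way = 0; way < NUM_WAYS; way++)
        if (set->valid[way] && set->ptag[way] == ptag)
            return way;
    return -1;                                /* miss */
}
```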
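For the Store Buffer, the abstract states only that a FIFO between the Dcache and memory lets a write-back be handed off in two cycles without stalling the pipeline. The sketch below models that decoupling in C under assumed parameters (the depth, entry layout, and function names are all illustrative, not taken from the thesis): the pipeline side enqueues dirty lines and continues, while the memory side drains entries independently.

```c
#include <stdint.h>
#include <stdbool.h>

#define SB_DEPTH 8                        /* illustrative buffer depth    */

typedef struct {
    uint32_t addr;                        /* line address                 */
    uint32_t data[8];                     /* one 32-byte line, assumed    */
} sb_entry_t;

typedef struct {
    sb_entry_t slot[SB_DEPTH];
    unsigned   head, tail, count;         /* simple ring-buffer FIFO      */
} store_buffer_t;

/* Pipeline side: enqueue a dirty line and move on; returns false (stall)
 * only when the buffer is full, so back-to-back write-backs normally do
 * not block the pipeline.                                                */
static bool sb_push(store_buffer_t *sb, const sb_entry_t *e)
{
    if (sb->count == SB_DEPTH)
        return false;
    sb->slot[sb->tail] = *e;
    sb->tail = (sb->tail + 1) % SB_DEPTH;
    sb->count++;
    return true;
}

/* Memory side: drain one entry per memory transaction, independently of
 * the pipeline, e.g. whenever the bus is idle.                           */
static bool sb_pop(store_buffer_t *sb, sb_entry_t *out)
{
    if (sb->count == 0)
        return false;
    *out = sb->slot[sb->head];
    sb->head = (sb->head + 1) % SB_DEPTH;
    sb->count--;
    return true;
}
```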
Keywords/Search Tags: Micro-processor, Cache, TLB, Design, Simulation and Verification