
On improving efficiency and saving energy in data caches

Posted on: 2011-06-12
Degree: Ph.D.
Type: Dissertation
University: Santa Clara University
Candidate: Subha, Srinivasan
Full Text: PDF
GTID: 1448390002453203
Subject: Engineering
Abstract/Summary:
This work presents a new hybrid cache consisting of a direct-mapped cache and a fully associative cache. A data array of any dimension is transformed into a one-dimensional array in which the data are rearranged in the order they will be referenced. The reference patterns of data arrays in a loop are used to minimize cache misses by labeling each block in the cache with a parameter, block_max: the maximum iteration number at which that block is accessed. A new replacement policy based on block_max is proposed for this hybrid cache; it limits premature eviction of still-live blocks. A performance improvement of 89% in average memory access time was observed over a conventional direct-mapped cache of the same size.

This work also proposes an algorithm that determines the block size for the variables in a program at predetermined points, called decision points, based on their access patterns. The decision points divide the program into segments, and rules for choosing them are developed. The algorithm identifies the decision points and formulates an optimization function for the average memory access time of the variables involved at each one; solving this function under its constraints then yields the optimal block size. A performance improvement of 64% is observed for matrices of size six. The proposed model is compared with prefetching and shows better results.

A method to save energy in set-associative caches is proposed next. The method collects the access time of each memory address by profiling, and each cache way maintains additional information about its next access. All ways of the cache are placed in either disable mode or low-energy mode, as supported by the cache, and at each time unit the way that will be accessed next is enabled. If no way is due to be accessed in the next time unit, the generated address is placed in the cache according to the replacement algorithm and the address mapping function; during this mapping, all ways of the mapped set are enabled, as in a traditional set-associative cache. Average energy savings of 63% and a performance improvement of 14% over a way-prediction cache were observed.

A fully associative cache with modified address translation using XOR functions is proposed next. Its performance is compared with direct-mapped, set-associative, and fully associative memories of the same size, and the energy consumption of the proposed model is analyzed. Expressions for the average memory access time of the proposed model are stated, its energy consumption is compared with direct-mapped, set-associative, and fully associative caches of the same size, and conditions under which it outperforms them are derived. Simulations are done with the SPEC 2000 benchmarks. For the chosen parameters, the average memory access time is found to be equal to that of direct-mapped, set-associative, and fully associative memories of the same size. The energy consumption is comparable to that of a set-associative cache with the same size and number of ways, and an improvement in energy consumption of 99% is seen relative to a fully associative memory of the same size.

This dissertation also proposes an algorithm for buffer cache management with prefetching. The proposed algorithm is compared with the Waiting Room and Weighing Room (W2R) algorithm for sequential and random inputs: for sequential input the performance is comparable to the W2R algorithm, while for random input the proposed algorithm performs better by 9%.

Finally, this dissertation proposes a new cache type that combines exclusive and inclusive behavior. Initially the entire cache system behaves as an exclusive cache; when a block in the level-one cache is reused, the corresponding block/way becomes inclusive, and when a new block is fetched into the cache, that way is reset to exclusive. Conditions under which this model outperforms a traditional inclusive cache are derived, and performance improvements of 66% over the inclusive cache are observed. (Abstract shortened by UMI.)
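The block_max replacement policy described in the first part of the abstract can be read as a Belady-flavored heuristic: blocks whose last use is already past are evicted first. The sketch below is one plausible interpretation under that assumption, not the dissertation's exact policy; BlockMaxCache and its methods are illustrative names.

```python
class BlockMaxCache:
    """Hypothetical fully associative side of the hybrid cache, with each
    block tagged by block_max, the last loop iteration that references it."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}                # tag -> block_max (last iteration of use)

    def access(self, tag, block_max, current_iter):
        """Touch `tag` at iteration `current_iter`; fill on a miss, evicting if full."""
        if tag in self.blocks:
            return True                 # hit
        if len(self.blocks) >= self.capacity:
            self._evict(current_iter)
        self.blocks[tag] = block_max
        return False                    # miss

    def _evict(self, current_iter):
        # Prefer a dead block (its last use is already past); otherwise evict
        # the block whose final use lies farthest in the future, so blocks
        # that are still needed soon are not preempted.
        dead = [t for t, m in self.blocks.items() if m < current_iter]
        victim = dead[0] if dead else max(self.blocks, key=self.blocks.get)
        del self.blocks[victim]
```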
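The decision-point optimization is built around average memory access time. The standard AMAT identity is shown below with made-up numbers; the helper amat, the candidate block sizes, and the hit/miss costs are illustrative assumptions, not the dissertation's actual formulation.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time: hit time plus miss rate times miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Illustrative (made-up) numbers: for one program segment, pick the block size
# whose profiled miss rate gives the lowest AMAT, assuming a 1-cycle hit and a
# 50-cycle miss penalty.
miss_rates = {16: 0.12, 32: 0.08, 64: 0.10}    # block size -> profiled miss rate
best_block = min(miss_rates, key=lambda b: amat(1.0, miss_rates[b], 50.0))
print(best_block)                               # -> 32 for these numbers
```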
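The way-level energy scheme can be pictured as a per-way schedule of profiled next-access times, with only the imminently needed way powered up. The sketch below assumes a discrete time-unit model; the class and field names are hypothetical.

```python
class WayPowerController:
    """Hypothetical per-set controller: only the way due to be accessed in the
    next time unit is enabled; the rest stay in low-energy or disable mode."""

    def __init__(self, num_ways):
        self.next_access = [None] * num_ways    # profiled next-access time per way
        self.enabled = [False] * num_ways

    def tick(self, now):
        # Enable exactly the way (if any) whose profiled next access is imminent.
        for w, t in enumerate(self.next_access):
            self.enabled[w] = (t == now + 1)

    def fill(self):
        # On a placement, all ways of the mapped set are enabled, as in a
        # traditional set-associative lookup.
        self.enabled = [True] * len(self.enabled)
```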
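The abstract does not give the XOR translation itself. The function below is a generic XOR-folding hash, shown only as a plausible shape for a "modified address translation using XOR functions"; the dissertation's actual mapping may differ.

```python
def xor_fold(addr, bits):
    """XOR-fold an address into a `bits`-wide value by XORing successive
    `bits`-wide slices together (a generic XOR hash, not the dissertation's)."""
    mask = (1 << bits) - 1
    h = 0
    while addr:
        h ^= addr & mask    # fold the next address slice into the result
        addr >>= bits
    return h

print(xor_fold(0xDEADBEEF, 8))   # folds four bytes into one 8-bit value
```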
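The reuse-driven switch between exclusive and inclusive behavior reduces to two transitions per way, as the minimal sketch below shows; HybridWay and its method names are illustrative, and the surrounding cache machinery is omitted.

```python
class HybridWay:
    """Hypothetical way state for the exclusive-to-inclusive hybrid cache."""

    def __init__(self):
        self.tag = None
        self.inclusive = False          # every way starts out exclusive

    def fill(self, tag):
        # Fetching a new block resets the way to exclusive behavior.
        self.tag, self.inclusive = tag, False

    def reuse_in_l1(self):
        # Reuse of the block in the level-one cache promotes it to inclusive.
        self.inclusive = True
```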
Keywords: Cache, Fully associative, Energy, Average memory access time, Data, Performance, Same size, Model