
On-chip FIFO cache for network I/O: A feasibility study

Posted on: 2011-09-20
Degree: M.S
Type: Thesis
University: State University of New York at Binghamton
Candidate: Chen, Shunfei
GTID: 2448390002455995
Subject: Computer Science
Abstract/Summary:
The large gap between memory system performance and processor performance continues to grow in spite of advances in process technologies and memory system architecture. This gap is bridged, in general, using a multi-level on-chip cache. Many network applications, particularly server applications, use network data in a streaming fashion: the data incoming from and outgoing to network interfaces are used at most once or twice. Consequently, when packet contents are accessed via the cache hierarchy, many of these data items remain in the cache long after they are consumed. The resulting cache pollution deprives other applications of effective use of the cache and degrades overall processor throughput.

This thesis proposes the use of a separate FIFO cache for holding incoming network data to avoid polluting the main processor caches. The proposed design exploits the fact that almost 100% of incoming packets are accessed by the server within a very short duration after their arrival, in a fairly FIFO fashion. The proposed FIFO cache for incoming data streams directly accepts data DMA-ed from the network interface card (NIC) and permits the processing cores to consume the incoming data directly from the FIFO cache. The FIFO cache includes additional mechanisms for looking up the data of any incoming packet and implements a replacement policy that evicts data that is not accessed in FIFO order, as well as data accessed in FIFO order after two consecutive accesses. This pro-active deletion of sequentially accessed data contrasts with the behavior of traditional caches, where data is evicted only when new data must be brought in, and it allows the size of the FIFO cache to be kept small.
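The replacement policy described above can be sketched in software terms. The following Python model is illustrative only (the class, method, and threshold names are assumptions, not the thesis's actual hardware design): DMA appends new packet data at the tail, in-FIFO-order reads are counted and the entry is deleted pro-actively after two consecutive accesses, and an out-of-FIFO-order access evicts the entry immediately after servicing it.

```python
from collections import OrderedDict

class StreamFIFOCache:
    """Illustrative sketch of the proposed FIFO cache for network data.

    Pro-active deletion policy (per the abstract):
      * an entry read in FIFO order is deleted after two consecutive accesses;
      * an entry read out of FIFO order is deleted immediately after the read.
    """

    MAX_READS = 2  # evict after two consecutive in-order accesses (assumed constant)

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # addr -> (data, read_count), oldest first

    def dma_insert(self, addr, data):
        # New packet data arrives from the NIC; plain FIFO eviction is the
        # fallback when pro-active deletion has not already made room.
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict oldest entry
        self.entries[addr] = (data, 0)

    def read(self, addr):
        if addr not in self.entries:
            return None  # miss: serviced by the normal memory hierarchy
        oldest = next(iter(self.entries))
        data, count = self.entries[addr]
        if addr != oldest:
            # Out-of-FIFO-order access: evict immediately after returning data.
            del self.entries[addr]
            return data
        count += 1
        if count >= self.MAX_READS:
            del self.entries[addr]  # pro-active deletion keeps the FIFO small
        else:
            self.entries[addr] = (data, count)
        return data
```

In this sketch, pro-active deletion (rather than capacity-triggered eviction) is what bounds the cache's working size, mirroring the contrast the abstract draws with traditional caches.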
We evaluate the proposed design using a cycle-accurate full-system simulator that executes the application, the OS, and the networking protocol stacks, with accurate simulation models for the NIC, the DMA infrastructure, the memory system, and a multicore processor. Our evaluations demonstrate that the proposed FIFO cache for incoming network data substantially increases overall CPU performance simply by reducing the cache pollution caused by incoming network data streams.
Keywords/Search Tags:FIFO cache, Network, Data, Performance, Memory system, Processor