
Design And Implementation Of Network Ramdisk Based On Fast Networks

Posted on: 2004-06-02
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y Q Wang
Full Text: PDF
GTID: 1118360152457228
Subject: Computer Science and Technology
Abstract/Summary:
Data-intensive applications have recently emerged as one of the most important workloads in high-performance computing. Because of their enormous main-memory usage, it is not surprising that they frequently swap memory contents out to, and back in from, swap space during execution. Conventional virtual memory operating systems use magnetic disks as backing storage. Magnetic disks provide high data transfer rates, large storage capacity, and random access, making them an appealing backing-storage medium. However, the average seek time of a magnetic disk is several orders of magnitude slower than memory access time.

Recent advances in CPU speed, network bandwidth, and memory size have made it feasible to use the idle memory of networked workstations as backing storage, with improved performance and greater functionality. We propose a novel prediction-based prefetching framework named PNMS (Prediction-based Network Memory System) to leverage idle memory resources across the network. Each memory server machine can support heterogeneous client machines running a wide variety of operating systems. Clients that exceed the capacity of their local memory access remote memory servers across a high-speed network to obtain additional storage space. By placing the prediction program and the applications on different workstations, an idle node can prefetch non-resident pages into a target node's memory, further reducing I/O stalls.

We present a kernel-level reliable message-passing protocol that provides low-latency, low-overhead kernel-to-kernel communication. It coexists with traditional protocols and shares the same communication channel. A reliability policy is presented and evaluated; it is resilient to the failure of a single workstation.

We study mechanisms for producing smarter prediction algorithms and present an aggressive Markov algorithm driven by page faults. It is based on standard PPM (Prediction by Partial Matching) but also accounts for typical application behaviors such as sequential and strided access. We also discuss the algorithms we have chosen and the policies and mechanisms used to control prediction quality.

Among PPM models, higher-order models generally achieve higher predictive accuracy, but they are also far more complex: their large number of states inflates both space and runtime requirements. This thesis presents a new method for simplifying multi-order PPM models by pruning regular references. It uses a single node to represent a run of sequential references, instead of one node per reference, which not only reduces the number of states but also improves prediction performance.

In a large system, collecting complete load information imposes substantial CPU and network overhead. Because load information in a cluster is inherently inaccurate and partial, we propose a centralized smallest-k-subset random algorithm for selecting candidate servers and design an information cache. Adopting multiple centralized servers makes the system more scalable and fault-tolerant. The cache's special replacement algorithm ensures that the nodes it holds are the k most underloaded nodes in the system.

We present the logical memory server, an abstraction of the network memory server. It is transparent to client and server architectures and uses efficient algorithms and data structures to retrieve data in constant time on average.
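To make the page-fault-driven predictor concrete, here is a minimal sketch in C of an order-1 Markov table combined with a stride heuristic, in the spirit of the PPM-based algorithm described above. All names, the table size, and the fallback policy are illustrative assumptions, not the thesis's actual design.

```c
/*
 * Sketch of a page-fault-driven predictor: an order-1 Markov table
 * records fault-to-fault transitions; a stride heuristic handles
 * sequential and strided scans directly. Illustrative only.
 */
#include <stdio.h>

#define TABLE_SIZE 4096          /* hashed Markov transition table */

struct transition {
    unsigned long from;          /* faulting page number           */
    unsigned long to;            /* page that faulted next         */
    unsigned int  count;         /* how often this transition hit  */
};

static struct transition table[TABLE_SIZE];
static unsigned long last_page;
static long          last_stride;

static unsigned int hash_page(unsigned long page)
{
    return (unsigned int)(page * 2654435761UL) % TABLE_SIZE;
}

/* Record a fault and return a predicted next page to prefetch. */
unsigned long record_fault_and_predict(unsigned long page)
{
    long stride = (long)page - (long)last_page;
    struct transition *t = &table[hash_page(last_page)];

    /* Update the order-1 Markov entry for the previous page. */
    if (t->from == last_page && t->to == page) {
        t->count++;
    } else {
        t->from = last_page;
        t->to = page;
        t->count = 1;
    }

    /* Stride heuristic: two equal non-zero deltas in a row suggest
     * a sequential or strided scan; predict that it continues. */
    int stride_stable = (stride == last_stride && stride != 0);
    last_stride = stride;
    last_page = page;

    if (stride_stable)
        return page + stride;

    /* Otherwise fall back to the Markov table for this page. */
    t = &table[hash_page(page)];
    if (t->from == page && t->count > 0)
        return t->to;

    return page + 1;             /* default: assume sequential */
}

int main(void)
{
    unsigned long faults[] = { 10, 11, 12, 13, 40, 41, 42 };
    int n = sizeof(faults) / sizeof(faults[0]);
    for (int i = 0; i < n; i++)
        printf("fault %lu -> prefetch %lu\n", faults[i],
               record_fault_and_predict(faults[i]));
    return 0;
}
```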
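The run-compression idea for multi-order PPM can be illustrated with a small sketch: a node records a whole sequential run (start page plus length), so consecutive references extend an existing node instead of creating new states. The structure and names below are assumptions for illustration only.

```c
/*
 * Sketch of run compression for a PPM model: one node holds an
 * entire sequential run instead of one node per reference.
 */
#include <stdio.h>

struct run_node {
    unsigned long start;   /* first page of the sequential run */
    unsigned long len;     /* number of consecutive pages       */
};

/* Absorb a reference: extend the current run if the page continues
 * it, otherwise start a new run node. Returns 1 if a new node
 * (state) was created, 0 if an existing one absorbed the page. */
int absorb_reference(struct run_node *node, unsigned long page)
{
    if (node->len > 0 && page == node->start + node->len) {
        node->len++;                 /* run continues: no new state */
        return 0;
    }
    node->start = page;              /* break in the run: new state */
    node->len = 1;
    return 1;
}

int main(void)
{
    struct run_node node = { 0, 0 };
    unsigned long refs[] = { 100, 101, 102, 103, 200, 201 };
    int states = 0;

    for (int i = 0; i < 6; i++)
        states += absorb_reference(&node, refs[i]);

    /* Six references collapse into two run nodes instead of six. */
    printf("references: 6, states created: %d\n", states);
    return 0;
}
```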
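The information cache's replacement policy can be sketched as follows: keep a fixed set of k entries and evict the most loaded one whenever a less-loaded node reports in, so the cache converges on the k most underloaded nodes. The load metric, the value of k, and the reporting interface are placeholders, not the thesis's actual protocol.

```c
/*
 * Sketch of an information cache that retains the k most underloaded
 * nodes reported so far. Illustrative; the thesis's policy may differ.
 */
#include <stdio.h>

#define K 4

struct node_info {
    int id;
    int load;    /* lower = more idle memory available */
};

static struct node_info cache[K];
static int cached = 0;

void report_load(int id, int load)
{
    int worst = 0;

    /* If the node is already cached, just refresh its load. */
    for (int i = 0; i < cached; i++) {
        if (cache[i].id == id) {
            cache[i].load = load;
            return;
        }
    }

    if (cached < K) {                    /* room left: insert */
        cache[cached].id = id;
        cache[cached].load = load;
        cached++;
        return;
    }

    /* Cache full: evict the most loaded entry if the newcomer is
     * less loaded, so the cache converges on the k most underloaded
     * nodes seen in the system. */
    for (int i = 1; i < K; i++)
        if (cache[i].load > cache[worst].load)
            worst = i;
    if (load < cache[worst].load) {
        cache[worst].id = id;
        cache[worst].load = load;
    }
}

int main(void)
{
    int loads[][2] = { {1,80}, {2,20}, {3,60}, {4,40}, {5,10}, {6,90} };
    for (int i = 0; i < 6; i++)
        report_load(loads[i][0], loads[i][1]);
    for (int i = 0; i < cached; i++)
        printf("node %d load %d\n", cache[i].id, cache[i].load);
    return 0;
}
```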
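A plain chained hash table is one way to realize the claimed average constant-time retrieval of the logical memory server: one hash plus a short chain walk maps a client page to the server and slot holding it. The sketch below is an assumption about the mechanism; the thesis's actual data structures may differ.

```c
/*
 * Sketch of a logical-memory-server lookup: map a client's swapped-out
 * page to (server, offset) in average O(1) via a chained hash table.
 */
#include <stdio.h>
#include <stdlib.h>

#define BUCKETS 1024

struct page_loc {
    unsigned long page;      /* client page number           */
    int server;              /* which memory server holds it */
    unsigned long offset;    /* slot on that server          */
    struct page_loc *next;   /* hash chain                   */
};

static struct page_loc *buckets[BUCKETS];

static unsigned int hash(unsigned long page)
{
    return (unsigned int)(page * 2654435761UL) % BUCKETS;
}

void map_page(unsigned long page, int server, unsigned long offset)
{
    struct page_loc *p = malloc(sizeof(*p));
    if (!p)
        return;
    p->page = page;
    p->server = server;
    p->offset = offset;
    p->next = buckets[hash(page)];
    buckets[hash(page)] = p;
}

/* Average O(1): one hash plus a short chain walk. */
struct page_loc *lookup_page(unsigned long page)
{
    for (struct page_loc *p = buckets[hash(page)]; p; p = p->next)
        if (p->page == page)
            return p;
    return NULL;
}

int main(void)
{
    map_page(42, 3, 4096);
    struct page_loc *p = lookup_page(42);
    if (p)
        printf("page 42 -> server %d, offset %lu\n", p->server, p->offset);
    return 0;
}
```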
We also give a server-first algorithm for advertising idle memory. Finally, this thesis describes the design and implementation of a prototype system. It modifies the Linux kernel so that our new functions are called on swap-out and swap-in. Measurements obtained from the prototype clearly demonstrate the viability of systems based on this model: it speeds up the execution of both CPU-intensive and memory-intensive applications.
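The prototype's swap-path modification can be pictured schematically: the kernel's swap-out and swap-in routines are redirected to the network memory system, with the local disk as a fallback. The sketch below is user-level C with hypothetical function names (net_swap_out, net_swap_in, and so on), not actual Linux kernel code.

```c
/*
 * Schematic sketch (not real kernel code) of the dispatch the prototype
 * adds to the swap path: send pages to network memory when a server has
 * room, otherwise fall back to the local swap device.
 */
#include <stdio.h>

/* Hypothetical back ends; a real port would call the kernel-level
 * message-passing layer and the block layer respectively. */
int net_swap_out(unsigned long page)  { printf("net out  %lu\n", page); return 0; }
int net_swap_in(unsigned long page)   { printf("net in   %lu\n", page); return 0; }
int disk_swap_out(unsigned long page) { printf("disk out %lu\n", page); return 0; }
int disk_swap_in(unsigned long page)  { printf("disk in  %lu\n", page); return 0; }

/* The hook the modified swap path would call instead of writing the
 * page straight to the swap device. */
int swap_out_page(unsigned long page, int net_has_room)
{
    if (net_has_room && net_swap_out(page) == 0)
        return 0;               /* page now lives in remote memory */
    return disk_swap_out(page); /* fall back to the local disk     */
}

int swap_in_page(unsigned long page, int on_network)
{
    return on_network ? net_swap_in(page) : disk_swap_in(page);
}

int main(void)
{
    swap_out_page(7, 1);   /* goes to a memory server */
    swap_out_page(8, 0);   /* falls back to disk      */
    swap_in_page(7, 1);
    return 0;
}
```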
Keywords/Search Tags: network memory, Prediction by Partial Matching, Markov, memory hierarchies, network of workstations, idle resource, reliability, kernel level communication