
Scalable Single-image File System

Posted on: 2000-08-29  Degree: Doctor  Type: Dissertation
Country: China  Candidate: J Y Wang  Full Text: PDF
GTID: 1118360185495552  Subject: Computer system architecture
Abstract/Summary:
Traditional distributed file systems cannot provide clusters with a strict single-system image, and, having failed to keep pace with trends in computing technology, they also cannot meet cluster applications' requirements for I/O performance, scalability, and availability. The Dawning super-server is a typical cluster system; we have developed the COSMOS file system for it and call its prototype file system S2FS, an acronym for Scalable Single-image File System. This dissertation mainly presents S2FS's design, implementation, and evaluation.

First, S2FS is a global file system. In order to maintain a strict single-system image, it provides location transparency and strong UNIX file-sharing semantics. Although we lack the AIX operating system's source code, we can still add S2FS into AIX seamlessly at the Vnode/VFS interface so that S2FS maintains ABI/API compliance with the UNIX file system, demonstrating that the Virtual File System is an effective mechanism for achieving this objective.

Further, this dissertation highlights the research and evaluation of cooperative caching, which is used to improve S2FS's performance and scalability. After a sufficient condition for deadlock-free design is given, the directory-based invalidate cache coherence protocol is introduced and its cache coherence is verified using belief. We then propose a dual-granularity cache coherence protocol to further improve system performance, and devise a hint-based heuristic cooperative caching algorithm under the dual-granularity protocol. Analytical models are established for both the heuristic algorithm and the state-of-the-art N-Chance algorithm; the analytical results show that the heuristic algorithm effectively reduces I/O response time compared with the N-Chance algorithm in almost every case.

Finally, in order to eliminate the central file server bottleneck found in traditional file systems, S2FS splits the traditional server's functionality into two separate pieces, data storage and metadata management, and distributes them among cooperating networked machines. The metadata management server, which we call the manager, is responsible for storing and maintaining system metadata (including file inodes and the superblock), and it also records the locations of data in clients' caches so as to preserve cooperative cache coherence. The storage server implements network disk striping...
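
The general idea of a directory-based invalidate protocol, with a manager that tracks which client caches hold each block, can be pictured with a small sketch. The Python fragment below is illustrative only and is not taken from the dissertation; the Manager and Client classes, the read_block/write_block methods, and the single-manager setup are assumptions made for the example.

# Illustrative sketch (not the dissertation's code): a manager keeps a
# per-block directory of client holders and invalidates other cached
# copies before accepting a write, preserving strong sharing semantics.
class Client:
    def __init__(self, cid):
        self.cid = cid
        self.cache = {}          # block id -> locally cached data

    def invalidate(self, block):
        self.cache.pop(block, None)

class Manager:
    def __init__(self):
        self.directory = {}      # block id -> set of client ids holding it
        self.store = {}          # block id -> authoritative data ("disk")

    def read_block(self, client, block):
        # Record the new holder so later writes can invalidate this copy.
        self.directory.setdefault(block, set()).add(client.cid)
        data = client.cache.get(block)
        if data is None:
            data = self.store.get(block, b"")
            client.cache[block] = data
        return data

    def write_block(self, client, block, data, clients):
        # Invalidate every other cached copy before accepting the write.
        for cid in self.directory.get(block, set()) - {client.cid}:
            clients[cid].invalidate(block)
        self.directory[block] = {client.cid}
        client.cache[block] = data
        self.store[block] = data

# Usage: two clients share a block; a write by one invalidates the other.
clients = {i: Client(i) for i in range(2)}
mgr = Manager()
mgr.write_block(clients[0], 7, b"v1", clients)
mgr.read_block(clients[1], 7)                   # client 1 now caches block 7
mgr.write_block(clients[0], 7, b"v2", clients)  # client 1's copy is invalidated
assert 7 not in clients[1].cache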
Keywords/Search Tags: Single-image