
Distributed scheduling of remote processors in SCI-local area multiprocessors

Posted on: 1997-11-16
Degree: Ph.D.
Type: Dissertation
University: Wayne State University
Candidate: Agasaveeran, Saravanan
Full Text: PDF
GTID: 1468390014981065
Subject: Engineering
Abstract/Summary:
A Local Area MultiProcessor (LAMP) is a network of workstations with shared physical memory. In contrast to traditional Local Area Networks (LANs), LAMP attempts to provide and exploit physically shared memory between workstations to achieve higher performance. The Scalable Coherent Interface (SCI) (IEEE Standard 1596-1992) is a technology that can provide physically distributed shared memory and cache coherence among workstations across a high-bandwidth, low-latency network. SCI-LAMP is an architecture in which a number of personal workstations are connected by the SCI protocol. Although SCI-LAMP is more tightly coupled than a LAN because of the shared physical memory, it is still distributed in nature compared to traditional bus-based shared-memory multiprocessors. The operating system for SCI-LAMP must be fully distributed yet provide its users with a single global view of the system and support efficient, fair sharing of the unused, physically distributed remote processors.

We present a distributed decay-usage scheduling algorithm which exploits the distributed shared memory to collect and distribute global state information. The algorithm schedules the available idle remote processors among the requesting workstations according to their static base priorities and their past usage of remote processors. Each requesting workstation may be allocated zero or more remote processors based on its priority as well as its demand for remote processors.

We analytically model the proposed parallel decay-usage scheduler and present algorithms to compute the steady-state allocation of remote processors to the requesting workstations in three cases: (1) varying demand, same base priority; (2) same demand, varying priority; and (3) varying demand, varying priority. The steady-state performance of the algorithm in scheduling both sequential and parallel jobs is evaluated using simulation. The algorithm maintains fair allocation of remote processors with low overhead, and higher-priority workstations experience significantly faster job response times and higher speedups than lower-priority workstations at moderately high loads. The low overhead, due to the use of distributed shared memory and the high-performance network, enables finer-grained sharing of remote processors in SCI-LAMP than in traditional LANs.
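Decay-usage allocation of this kind can be illustrated with a short sketch. The Python fragment below is only an illustration under assumed details: the decay factor, the priority formula, and every identifier are hypothetical and not taken from the dissertation. It follows the idea stated in the abstract: each requesting workstation carries a static base priority and a decayed record of its past usage of remote processors, idle processors are granted one at a time to the workstation whose effective priority is currently best, and the usage record is decayed at the end of each interval so that past consumption gradually loses weight.

    # Minimal sketch of one decay-usage allocation step. All names, the
    # decay factor, and the priority formula are assumptions for
    # illustration; they are not the dissertation's exact algorithm.

    DECAY = 0.5  # assumed fraction of past usage carried into the next interval

    def allocate(idle_processors, requesters):
        """Grant idle remote processors to requesting workstations one at a time.

        Each requester is a dict with 'id', 'base_priority' (lower value
        means higher priority), 'usage' (decayed past usage), and 'demand'
        (processors wanted). Returns a mapping from id to processors granted.
        """
        grants = {r['id']: 0 for r in requesters}
        remaining = idle_processors
        while remaining > 0:
            # Only workstations whose demand is not yet satisfied compete.
            candidates = [r for r in requesters if grants[r['id']] < r['demand']]
            if not candidates:
                break
            # Effective priority: static base priority penalized by past usage.
            best = min(candidates, key=lambda r: r['base_priority'] + r['usage'])
            grants[best['id']] += 1
            best['usage'] += 1.0  # charge this grant to the workstation
            remaining -= 1
        return grants

    def end_of_interval(requesters):
        """Decay accumulated usage so older consumption loses weight over time."""
        for r in requesters:
            r['usage'] *= DECAY

In this sketch a lower numeric value means higher priority, and charging each grant back to the workstation's usage is what pushes repeated allocations toward a priority-weighted share, loosely mirroring the steady-state behaviour the abstract analyses.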
Keywords/Search Tags: Remote processors, Shared, Memory, Distributed, Area, Workstations, SCI-LAMP, Network