
Scheduling and optimization of parallel programs for message passing architectures

Posted on: 2002-04-03
Degree: Ph.D.
Type: Dissertation
University: State University of New York at Binghamton
Candidate: Shafer, Stephen Ray
Full Text: PDF
GTID: 1468390011498030
Subject: Computer Science
Abstract/Summary:
Even with recent advances in computer architecture, many problems in computer science still require far more processing power than a single-CPU system can deliver in a reasonable amount of time. Parallel processing can provide the power to address these so-called “grand challenge” problems. A program may be parallelized by partitioning either its data (data parallelism) or its processing (task parallelism). For task-parallel processing, the tasks must be scheduled onto processors. This can be done by hand, but doing so is difficult and error-prone, especially for a large number of tasks. To aid in producing efficient code, a static scheduler, which uses fixed estimates of the computation and communication times, can automatically determine the processor assignment for each task. Heuristics such as list scheduling and clustering have been used to cope with the enormous number of possible processor allocations and execution orders for the tasks. A new scheduling technique is described that combines concepts from both list scheduling and clustering. Results comparing this new path-based clustering scheduler against two other current schedulers show that it produces more efficient schedules for a variety of randomly generated task graphs, as well as faster-running application code on a cluster of workstations.

Even after a schedule has been generated, there are still opportunities to reduce a program's execution time further. Three such optimizations are examined. First, the task execution order and/or message dispatch order in a schedule can be reorganized. Second, multiple inter-processor messages can be merged into a single message. Finally, individual tasks can be duplicated so that they execute on more than one processor. These optimizations were tested with task graphs of varying sizes and characteristics, as well as with application code, and are shown to reduce the execution time of the schedules even further.
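To make the scheduling problem concrete, the sketch below shows a minimal static list scheduler in the spirit described above: tasks are ranked by their bottom level (longest remaining computation-plus-communication path to an exit task) and each is greedily assigned to the processor that yields the earliest finish time. The task graph, cost values, and priority function here are illustrative assumptions; this is not the dissertation's path-based clustering algorithm.

```python
# Minimal list-scheduling sketch (illustrative; NOT the dissertation's
# path-based clustering scheduler). Costs and graph are made-up values.

# Task graph: computation cost per task, and successors with edge
# (communication) costs.
comp = {"a": 2, "b": 3, "c": 2, "d": 4}
succ = {"a": {"b": 1, "c": 2}, "b": {"d": 1}, "c": {"d": 3}, "d": {}}

def bottom_level(t):
    """Longest comp+comm path from t to an exit task (the list priority)."""
    return comp[t] + max((w + bottom_level(s) for s, w in succ[t].items()),
                         default=0)

def list_schedule(num_procs=2):
    ready_time = [0.0] * num_procs   # when each processor next becomes free
    finish = {}                      # task -> (finish time, processor)
    pred = {t: [u for u in succ if t in succ[u]] for t in comp}
    order = sorted(comp, key=bottom_level, reverse=True)
    scheduled = []
    while order:
        # Pick the highest-priority task whose predecessors are all done.
        t = next(x for x in order if all(u in finish for u in pred[x]))
        order.remove(t)
        best = None
        for p in range(num_procs):
            # Data from a predecessor on another processor pays its comm cost.
            data_ready = max((finish[u][0] +
                              (0 if finish[u][1] == p else succ[u][t])
                              for u in pred[t]), default=0.0)
            start = max(ready_time[p], data_ready)
            if best is None or start + comp[t] < best[0]:
                best = (start + comp[t], p, start)
        end, p, start = best
        ready_time[p] = end
        finish[t] = (end, p)
        scheduled.append((t, p, start, end))
    return scheduled

for t, p, s, e in list_schedule():
    print(f"task {t} -> P{p}  [{s}, {e}]")
```

Note how the fixed computation and communication estimates drive every placement decision; a clustering pass, by contrast, would first group heavily communicating tasks so that their edge costs become zero.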
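The benefit of message merging can likewise be seen with a simple back-of-the-envelope cost model, assuming each message pays a fixed startup latency plus a bandwidth term. The latency and bandwidth figures below are hypothetical, not measurements from the dissertation.

```python
# Illustrative cost model for message merging; parameters are assumptions.
LATENCY = 50e-6      # per-message startup cost, seconds
BANDWIDTH = 100e6    # bytes per second

def send_cost(nbytes):
    return LATENCY + nbytes / BANDWIDTH

# Three messages between the same pair of processors.
msgs = [4096, 1024, 2048]
separate = sum(send_cost(n) for n in msgs)
merged = send_cost(sum(msgs))
print(f"separate: {separate * 1e6:.1f} us, merged: {merged * 1e6:.1f} us")
# Merging pays the startup latency once instead of three times.
```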
Keywords/Search Tags: Parallel, Scheduling, Message, Execution