
Load balancing strategies for parallel architectures

Posted on: 2004-11-16
Degree: Ph.D.
Type: Dissertation
University: The University of Texas at Austin
Candidate: Iqbal, Saeed
Full Text: PDF
GTID: 1458390011957266
Subject: Engineering
Abstract/Summary:
This research explores aspects of load balancing on parallel architectures. Load-balancing algorithms are used to partition large tasks for parallel computation, and several researchers have proposed static and dynamic load-balancing algorithms to improve the parallel runtime and scalability of parallel computations. In this study, the trade-off between partitioning quality and partitioning runtime of common geometric and multilevel load-balancing algorithms is investigated, and a sensitivity analysis of static load-balancing algorithms to their design parameters is conducted. Some dynamic load-balancing algorithms apply to a broad class of computations, while others are application specific (e.g., to data mining or databases). This dissertation explores partitioning quality, runtime, and sensitivity of dynamic load balancing in the context of parallel scientific computations involving adaptive mesh refinement (AMR). In modern AMR computations, the problem size varies dynamically during the simulation. Efficient dynamic load balancing, combined with dynamically changing the number of processors at runtime, can improve resource utilization in such computations. This dissertation proposes and evaluates an adaptive approach for changing the number of processors during the lifetime of a simulation, which adds a new dimension to the dynamic load-balancing design space. The present study also explores the trade-off between partitioning quality, partitioning time, and data movement of various dynamic load-balancing algorithms, and investigates the effect of dynamically varying processor counts on cluster performance. Our simulations show that it is advantageous to vary the number of processors at runtime for parallel computations with large variations in problem size.
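To make the static partitioning problem concrete, the following is a minimal illustrative sketch of a greedy longest-processing-time (LPT) partitioner, one of the simplest load-balancing heuristics. The task weights and processor count are hypothetical examples, and this is not one of the geometric or multilevel algorithms evaluated in the dissertation:

```python
# Minimal sketch: greedy longest-processing-time (LPT) static partitioning.
# Task weights and processor count here are illustrative, not from the study.
import heapq

def lpt_partition(task_weights, num_procs):
    """Assign tasks to processors, heaviest first, each to the currently
    least-loaded processor. Returns per-processor loads and assignments."""
    heap = [(0.0, p) for p in range(num_procs)]  # (current load, proc index)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_procs)]
    loads = [0.0] * num_procs
    for task, w in sorted(enumerate(task_weights), key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(task)
        loads[p] = load + w
        heapq.heappush(heap, (load + w, p))
    return loads, assignment

loads, assignment = lpt_partition([5, 3, 8, 2, 7, 4], 3)
# Partitioning quality is often summarized as max load / average load,
# where 1.0 is a perfect balance.
imbalance = max(loads) / (sum(loads) / len(loads))
```

A heuristic like this is cheap to run but ignores communication between tasks; the quality-versus-runtime trade-off studied in the dissertation arises precisely because more sophisticated partitioners buy better balance and locality at higher partitioning cost.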
Further, for computations with a monotonically increasing problem size, as encountered in AMR, varying the number of processors can yield faster runtimes in addition to a substantial reduction in resource consumption. Finally, a detailed analysis of the effect of computational features (degrees of freedom, preconditioning) and parallel architectural features (network speed, processor speed) on parallel runtime and resource consumption is performed to gain further insight.
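One way to picture varying the processor count at runtime is a grain-size rule: resize the allocation so that per-processor work stays near a target, with hysteresis to avoid repartitioning churn. The rule below is a hypothetical sketch with made-up thresholds, not the policy evaluated in the dissertation:

```python
# Illustrative sketch of an adaptive processor-count rule for AMR phases.
# target_grain, hysteresis, and the bounds are hypothetical parameters.

def adapt_procs(problem_size, current_procs, target_grain,
                min_procs=1, max_procs=1024, hysteresis=0.25):
    """Return a processor count for the next AMR phase.

    Resizes only when per-processor work deviates from target_grain by more
    than the hysteresis fraction, limiting data movement from repartitioning."""
    grain = problem_size / current_procs
    if grain > target_grain * (1 + hysteresis):
        # Too much work per processor: request more processors.
        desired = -(-problem_size // target_grain)  # ceiling division
        return min(max_procs, int(desired))
    if grain < target_grain * (1 - hysteresis):
        # Processors underutilized: release some.
        desired = max(1, problem_size // target_grain)
        return max(min_procs, int(desired))
    return current_procs

# After refinement grows the mesh from 800k to 3M elements, the rule scales
# the allocation up rather than overloading the original 8 processors.
next_procs = adapt_procs(problem_size=3_000_000, current_procs=8,
                         target_grain=100_000)
```

For monotonically growing problems this kind of rule only ever adds processors, which matches the abstract's observation that growing the allocation with the problem can reduce both runtime and total resource consumption.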
Keywords/Search Tags: Parallel, Load balancing, Runtime