
Impact of shared memory and distributed memory platforms on the design and performance of parallel evolutionary algorithms

Posted on: 2003-11-15
Degree: Ph.D.
Type: Dissertation
University: The University of Mississippi
Candidate: James, Tabitha Lynn
Full Text: PDF
GTID: 1468390011484625
Subject: Operations Research
Abstract/Summary:
Parallel computing can be defined as the use of multiple processors to provide enhanced algorithmic performance. Problems in the realm of optimization have benefited greatly from parallel computing. Heuristic algorithms lend themselves well to parallelization; this, together with the fact that many problem classes in operations research include large instances requiring substantial computational resources, makes them attractive avenues for research.

The variety of available parallel platforms gives users a choice of platform on which to develop applications. The advantages and disadvantages of platform choice have largely been ignored, both in favor of model development and because of the limited availability of expensive computing platforms. However, the choice of platform can have a dramatic effect on the design and performance of the algorithms developed. This study explores the impact of platform choice on the development of parallel evolutionary algorithms.

Specifically, this research answers the following questions: (1) Does the platform utilized impact the design of parallel evolutionary algorithms? (2) Does the platform utilized impact the performance of parallel evolutionary algorithms on different problem types?

The first question is answered by the development of a classification table that allows a descriptive and uniform representation of parallel platforms. The table summarizes the key benefits and drawbacks of the platforms discussed in this work and could be used in other experiments to provide a clear overview of parallel platform characteristics. The second question is answered by evaluating the performance of the algorithms developed on several widely used test sets covering different problem types, including DeJong's test set, a set of quadratic assignment problems, and a set of set covering problems.

This study uses two popular parallel architectures: a CC-NUMA SGI Origin 2000 and a distributed-memory Linux cluster. Two programming environments were used: an MPI implementation on the cluster and OpenMP on the Origin. Parallel versions of both a genetic algorithm and a scatter search algorithm were developed for both platforms. The results show that platform choice has a significant impact on the design and performance of parallel evolutionary algorithms.
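To illustrate the design contrast the abstract alludes to, below is a minimal sketch (not the dissertation's actual code) of shared-memory parallel fitness evaluation for a genetic algorithm using OpenMP. The population size, chromosome length, sphere objective, and all identifiers are illustrative assumptions. On a distributed-memory cluster, the same step would instead typically be expressed with MPI (e.g., scattering the population across ranks, evaluating locally, and gathering fitness values), which changes the overall program structure rather than just the loop annotation.

/* Sketch only: OpenMP shared-memory fitness evaluation for a GA.
 * All sizes and the objective function are assumed for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define POP_SIZE 256   /* assumed population size */
#define GENES     32   /* assumed chromosome length */

/* Example objective: DeJong's sphere function (sum of squares). */
static double fitness(const double *genes, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += genes[i] * genes[i];
    return sum;
}

int main(void) {
    static double pop[POP_SIZE][GENES];
    static double fit[POP_SIZE];

    /* Random initial population over DeJong's domain [-5.12, 5.12]. */
    for (int i = 0; i < POP_SIZE; ++i)
        for (int j = 0; j < GENES; ++j)
            pop[i][j] = (double)rand() / RAND_MAX * 10.24 - 5.12;

    /* Shared-memory parallelism: each thread evaluates a slice of the
     * population; the population array is shared, so no explicit
     * communication is required. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < POP_SIZE; ++i)
        fit[i] = fitness(pop[i], GENES);

    printf("evaluated %d individuals on up to %d threads\n",
           POP_SIZE, omp_get_max_threads());
    return 0;
}

Compiled with an OpenMP-capable compiler (e.g., gcc -fopenmp), this highlights why platform choice shapes design: the shared-memory version is a one-directive change to a serial loop, whereas a message-passing version must manage data distribution explicitly.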
Keywords/Search Tags:Parallel, Performance, Platform, Impact, Memory, Problem