Several kinds of parallel programs use distributed data structures of various topologies, ranging from regular arrays to trees to more complex structures. Here we use algebraic theories and their morphisms to express the topology of data structures distributed over the processors of a parallel network. Data dependencies among the components of a data field induce communication among the processors, and we show how various transformations can make this communication more efficient. We present a methodology for deriving parallel programs from functional specifications. As an example, we consider the parallelized Barnes-Hut algorithm, which uses a distributed tree structure over large data sets.

We then develop a category-theoretic formulation that captures the discrete topology of a data structure, and we show how the semantics of a computation on a distributed structure can be expressed in terms of local computations on the components of the structure. Using the notions of covers and sheaves, we show how such local computations can be combined in various ways, and how multiple levels of locality and data replication across processors can be represented in this formalism.