
Debugging, Repair, and Synthesis of Task-Parallel Programs

Posted on: 2018-02-01
Degree: Ph.D.
Type: Thesis
University: Rice University
Candidate: Surendran, Rishi
Full Text: PDF
GTID: 2478390020456121
Subject: Computer Science
Abstract/Summary:
Parallelizing sequential programs to effectively utilize modern multicore architectures is a key challenge facing application developers and domain experts. There is therefore a pressing need for tools that help programmers develop correct and efficient parallel programs. In this thesis, we present algorithms for debugging, repairing, and synthesizing task-parallel programs that can provide a foundation for such tools. Our work focuses on task-parallel programs with both imperative async-finish parallelism and functional-style parallelism using futures.

First, we address the problem of detecting races in parallel programs with async, finish, and future constructs. Existing dynamic determinacy race detectors for task-parallel programs are limited to programs with strict computation graphs, in which a task can only wait for a subset of its descendant tasks to complete. In this thesis, we present the first known determinacy race detector for the non-strict computation graphs generated by futures. The space and time complexity of our algorithm degenerate to those of the classical SP-bags algorithm when only structured parallel constructs such as spawn-sync and async-finish are used. In the presence of point-to-point synchronization using futures, the complexity increases by a factor determined by the number of future task creation and get operations, as well as the number of non-tree edges in the computation graph.

Next, we introduce a hybrid static+dynamic, test-driven approach to repairing data races in task-parallel programs. Past solutions to this problem have used static-only or dynamic-only approaches, both of which incur significant limitations in practice. Static approaches can guarantee soundness in many cases but are limited in precision when analyzing medium- or large-scale software with accesses to pointer-based data structures across multiple procedures. Dynamic approaches are more precise, but their proposed repairs are limited to a single input and are not reflected back in the original source program. Our approach includes a novel coupling between static and dynamic analyses. First, we execute the program on a concrete test input and dynamically determine the set of data races for that input. Next, we compute a set of static "finish" placements that repairs these races, respects the static scoping rules of the program, and maximizes parallelism.

Finally, we introduce a novel approach to automatically synthesizing task-parallel programs with futures from sequential programs through the identification of pure method calls. Our approach is built on three new techniques that address the challenge of automatic parallelization via future synthesis: candidate future synthesis, parallelism benefit analysis, and threshold expression synthesis. In candidate future synthesis, our system annotates pure method calls as async expressions and synthesizes a parallel program with future objects and type declarations that are more precise than those from past work (see the sketch below). Next, the system performs a novel parallelism benefit analysis to determine which async expressions may need to be executed sequentially for overhead reasons, based on execution profile information collected from multiple test inputs.
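To make candidate future synthesis concrete, the following is a minimal sketch in plain Java. It is illustrative only: the thesis targets Habanero-Java-style async/finish/future constructs, which are approximated here with java.util.concurrent.CompletableFuture, and the method fib and its call sites are hypothetical examples rather than code from the thesis.

```java
import java.util.concurrent.CompletableFuture;

public class CandidateFutureSketch {
    // A side-effect-free (pure) method: a candidate for future synthesis.
    static long fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        // Sequential original: the two calls run one after the other.
        long seq = fib(30) + fib(31);

        // Synthesized version: the first pure call is wrapped in a future,
        // so it can run in parallel with the second call. The declared type
        // CompletableFuture<Long> stands in for the precise future type
        // a synthesizer would emit.
        CompletableFuture<Long> f = CompletableFuture.supplyAsync(() -> fib(30));
        long par = fib(31) + f.join(); // join() plays the role of "get" on the future

        System.out.println(seq + " " + par);
    }
}
```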
Finally, threshold expression synthesis uses the output of parallelism benefit analysis to synthesize predicate expressions that determine at runtime whether a specific pure method call should be executed sequentially or in parallel.

These algorithms have been implemented and evaluated on a range of benchmark programs. The evaluation establishes the effectiveness of our approach with respect to dynamic data race detection overhead, compile-time overhead, and the precision and performance of the repaired and synthesized code.
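As a rough sketch of what a threshold-guarded call could look like, the snippet below again uses plain Java rather than the thesis's actual constructs; the THRESHOLD constant and the choice of the argument n as the guard are invented stand-ins for the profile-derived predicate the thesis synthesizes.

```java
import java.util.concurrent.CompletableFuture;

public class ThresholdGuardSketch {
    // Hypothetical profile-derived cutoff: below it, spawning a task costs
    // more than it saves, so the call stays sequential.
    static final int THRESHOLD = 25;

    static long fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    // The synthesized predicate decides at runtime whether the pure call
    // runs as a future (parallel) or as an ordinary call (sequential).
    static CompletableFuture<Long> maybeAsyncFib(int n) {
        return n >= THRESHOLD
                ? CompletableFuture.supplyAsync(() -> fib(n))  // spawn a task
                : CompletableFuture.completedFuture(fib(n));   // run in place
    }

    public static void main(String[] args) {
        CompletableFuture<Long> f = maybeAsyncFib(32);
        long other = fib(20);                 // work overlapped with the future
        System.out.println(other + f.join()); // wait for the future's value
    }
}
```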
Keywords/Search Tags: Parallel, Programs, Synthesis, Dynamic, Data