
Data dependence testing in the presence of pointers and pointer-based data structures

Posted on: 1999-08-12
Degree: Ph.D.
Type: Thesis
University: University of California, Irvine
Candidate: Hummel, Joseph Edward
Full Text: PDF
GTID: 2468390014969595
Subject: Computer Science
Abstract/Summary:
Data structures are important components of computer software, since the proper choice of data structure can greatly impact the efficiency of the resulting application. In FORTRAN, where arrays serve as the primary data structure, optimizing compilers expend a great deal of effort performing array analysis in order to further improve efficiency. Pointer-based data structures offer a number of advantages over array-based ones and are thus quite common in applications written in Ada, C, C++, FORTRAN90, and Java. However, pointer-based structures, which include linked lists, trees, and more complex DAG and cyclic structures, also present new and difficult analysis problems for optimizing compilers. These problems have yet to be solved in a general and accurate manner, which often translates into a significant performance loss on modern computer architectures.

One of the fundamental analyses performed by optimizing compilers is data dependence analysis. The absence of data dependences enables a compiler to generate semantically equivalent programs that run faster and use less space. This dissertation presents the first general approach for performing accurate data dependence testing in the presence of complex, pointer-based data structures. The work is motivated by the thesis that even complex data structures exhibit a good deal of independence, independence that can be exploited at all levels by an optimizing compiler, from coarse-grain, loop-level transformations to fine-grain, instruction-level optimizations.

Results derived from a prototype C compiler clearly support the thesis: linear and near-linear speedups were obtained on an 8-PE shared-memory multiprocessor for a number of important benchmarks from a range of application domains (scientific simulation, computational biology, computer-aided design, and graphics). Four-fold speedups were also obtained at the instruction level on a simulated VLIW architecture; such optimizations have a multiplicative effect when applied in conjunction with coarse-grain parallelization. These results were enabled solely by the direct application of the ideas presented in this dissertation.
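As a concrete illustration of the kind of independence the abstract refers to, consider a loop that walks an acyclic linked list and updates each node in place. This sketch is not taken from the dissertation; the node type and function names are hypothetical. Because each iteration reads and writes only the node it visits, there are no loop-carried dependences through the heap, which is exactly the information a pointer-aware dependence test would need to establish before the iterations can be overlapped or run in parallel.

    #include <stddef.h>

    /* Hypothetical list node; field names are illustrative only. */
    typedef struct Node {
        double       value;
        struct Node *next;
    } Node;

    /* Each iteration touches only the node it is visiting, so, assuming the
     * list is acyclic and its nodes are disjoint, there are no loop-carried
     * data dependences through the heap.  A compiler that can prove this is
     * free to parallelize or software-pipeline the loop. */
    void scale_list(Node *head, double factor)
    {
        for (Node *p = head; p != NULL; p = p->next)
            p->value *= factor;
    }

The difficulty the dissertation addresses is proving such disjointness automatically: unlike array subscripts, the heap locations reached through p are not evident from the program text, so the compiler must reason about the shape of the pointer-based structure itself.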
Keywords/Search Tags: Data