
Computational experience with linear optimization and related problems

Posted on: 2011-09-28
Degree: Ph.D
Type: Dissertation
University: University of Illinois at Urbana-Champaign
Candidate: Elble, Joseph M
Full Text: PDF
GTID: 1448390002455884
Subject: Engineering
Abstract/Summary:
The computational aspects of the simplex algorithm are investigated, and high-performance computing is used to enhance the performance of analogous algorithms for linear systems. It is well known that the efficiency of mathematical optimization software is integrally linked to the preprocessing techniques it employs. Existing preprocessing techniques are reviewed, and their relative cost and effectiveness are explored. Scaling is the most common preconditioning technique used in linear optimization solvers. Existing techniques for obtaining scaling factors for linear systems are investigated, and it is discovered that, on average, no scaling technique outperforms the simplest one, equilibration, despite the added complexity and computational cost. Furthermore, simply applying techniques that work well in practice for linear systems (i.e., solving Ax = b, where A is an n × n full-rank matrix and b is an n-vector) does not solve the underlying problem of scaling a linear program. A small computational study demonstrates the effectiveness of the Orchard-Hays triangularization technique in preordering a simplex basis for factorization. Pricing and pivoting in the simplex algorithm involve selecting an entering and an exiting variable at each iteration. Several existing pricing procedures are reviewed, and new procedures are proposed that are capable of reducing the number of simplex iterations. By prohibiting specific variables from repeatedly re-entering the basis, degenerate cycles can be avoided. This phenomenon is analyzed computationally on several modern test problems.

High-performance computing is investigated for several related linear problems. A composite-Jacobi binormalization algorithm is developed for the graphics processing unit and achieves a six-fold performance improvement over the corresponding CPU implementation. The graphics processing unit is also used to solve large linear systems derived from partial differential equations. Well-known indirect methods are studied, and the results demonstrate that a graphics processing unit implementation of CGNR (conjugate gradient on the normal residual equations) is capable of outperforming a state-of-the-art parallel implementation of a specialized algorithm designed for a 16-node Linux cluster. The computational results demonstrate that the graphics processing unit offers a low-cost, high-performance computing solution for large-scale partial differential equations.
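For readers unfamiliar with equilibration, the scaling technique the abstract singles out, the following Python sketch illustrates the general idea: rows and columns are rescaled so the largest absolute entry in each approaches 1. This is an illustrative example only, not the dissertation's implementation; the function name, iteration limit, and convergence test are assumptions made for the sketch.

import numpy as np

def equilibrate(A, tol=1e-8, max_iter=10):
    # Iterative max-norm equilibration (illustrative sketch).
    # Returns S, r, c such that A = diag(r) @ S @ diag(c),
    # where every row and column of S has max-magnitude near 1.
    S = A.astype(float).copy()
    m, n = S.shape
    r = np.ones(m)   # accumulated row scaling factors
    c = np.ones(n)   # accumulated column scaling factors
    for _ in range(max_iter):
        row_max = np.max(np.abs(S), axis=1)
        row_max[row_max == 0] = 1.0          # leave empty rows unchanged
        S /= row_max[:, None]
        r *= row_max
        col_max = np.max(np.abs(S), axis=0)
        col_max[col_max == 0] = 1.0          # leave empty columns unchanged
        S /= col_max[None, :]
        c *= col_max
        if np.max(np.abs(np.max(np.abs(S), axis=1) - 1.0)) < tol:
            break
    return S, r, c

In a linear-programming context the scaled problem is solved and the scaling factors r and c are used to recover the solution of the original problem, which is part of why, as the abstract notes, good scaling for Ax = b does not automatically translate into good scaling for a linear program.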
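The GPU experiments center on CGNR, conjugate gradient applied to the normal equations A^T A x = A^T b without forming A^T A explicitly. The sketch below is a plain CPU reference version for orientation only; the dissertation's implementation runs the matrix-vector products on the graphics processing unit, and the tolerances and iteration limit here are assumed values.

import numpy as np

def cgnr(A, b, tol=1e-10, max_iter=1000):
    # CGNR: conjugate gradient on the normal equations (illustrative sketch).
    # Minimizes ||b - A x||_2 using only products with A and A^T.
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x            # residual of the original system
    z = A.T @ r              # residual of the normal equations
    p = z.copy()
    zz = z @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = zz / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = A.T @ r
        zz_new = z @ z
        if np.sqrt(zz_new) < tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x

Because each iteration is dominated by two matrix-vector products and a few vector updates, the method maps naturally onto GPU hardware, which is consistent with the abstract's finding that a GPU CGNR implementation can compete with a specialized 16-node cluster code.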
Keywords/Search Tags: Computational, Graphics processing unit, Linear, Performance, Computing, Algorithm, Optimization, Simplex