
Preconditioning sparse matrices for computing eigenvalues and solving linear systems of equations

Posted on: 2002-08-21
Degree: Ph.D.
Type: Dissertation
University: University of California, Berkeley
Candidate: Chen, Tzu-Yi
GTID: 1460390011990774
Subject: Computer Science
Abstract/Summary:
In this dissertation we look at the role of preconditioners in finding the eigenvalues of sparse matrices and in solving sparse systems of linear equations.

The eigenvalues of a matrix A are the λ such that Ax = λx, where x is referred to as the (right) eigenvector corresponding to λ. Numerical algorithms that compute the eigenvalues of a nonsymmetric matrix A typically have backward errors proportional to the norm of A, so it can be useful to precondition an n × n matrix A in such a way that its norm is reduced and its eigenvalues are preserved. We focus on balancing A, that is, finding a diagonal matrix D such that for 1 ≤ i ≤ n the norms of row i and column i of DAD^{-1} are the same. Interestingly, there are many relationships between balancing in particular vector norms and minimizing various matrix norms. For example, in [143] Osborne shows that balancing a matrix in the 2-norm also minimizes the Frobenius norm of DAD^{-1} over all D, up to scalar multiples. We summarize known results about balancing in other norms before defining balancing in a weighted norm and proving that it minimizes the 2-norm for nonnegative, irreducible A.

We use our results on balancing in a weighted norm to justify a set of novel Krylov-based balancing algorithms which approximate weighted balancing and never explicitly access individual entries of A. Because they use only matrix-vector (Ax), and sometimes matrix-transpose-vector (A^T x), multiplications to access A, these new algorithms can be used with eigensolvers that likewise assume only that a subroutine for computing Ax (and possibly A^T x) is available. We then show that for matrices from our test suite, these Krylov-based balancing algorithms do, in fact, often improve the accuracy to which eigenvalues are computed by dense or sparse eigensolvers. For our test matrices, Krylov-based balancing improved the accuracy of eigenvalues computed by sparse eigensolvers by up to 10 decimal places.
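The balancing idea above can be illustrated with a minimal dense sketch of the classical cyclic Osborne iteration: for each index i, scale row i and column i of the current iterate so their off-diagonal 2-norms agree. This is only an illustrative toy (the function name, tolerances, and dense storage are my own choices, not the dissertation's code), but it shows how the similarity transform DAD^{-1} leaves eigenvalues unchanged while reducing the matrix norm.

```python
import numpy as np

def osborne_balance(A, tol=1e-8, max_sweeps=100):
    """Cyclic Osborne-style balancing: find a diagonal D so that, for each i,
    row i and column i of D A D^{-1} have (nearly) equal 2-norms.
    A minimal dense sketch for illustration only."""
    B = np.array(A, dtype=float)
    n = B.shape[0]
    d = np.ones(n)
    for _ in range(max_sweeps):
        converged = True
        for i in range(n):
            # off-diagonal 2-norms of row i and column i of the current iterate
            r = np.sqrt(max(np.sum(B[i, :] ** 2) - B[i, i] ** 2, 0.0))
            c = np.sqrt(max(np.sum(B[:, i] ** 2) - B[i, i] ** 2, 0.0))
            if r == 0.0 or c == 0.0:
                continue
            f = np.sqrt(c / r)          # equalizes the two norms: f*r == c/f
            if abs(f - 1.0) > tol:
                converged = False
            d[i] *= f
            B[i, :] *= f                # row i of D A D^{-1} scales by d_i
            B[:, i] /= f                # column i scales by 1/d_i
        if converged:
            break
    return B, d
```

On a badly scaled matrix such as [[1, 1e4], [1e-4, 2]], one sweep equalizes the off-diagonal pair and shrinks the norm from about 10^4 to about 2.6, while the eigenvalues are exactly preserved because DAD^{-1} is a similarity transform.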
In addition, Krylov-based balancing can improve the condition numbers of eigenvalues, hence giving better computed error bounds.

We begin by discussing preconditioners for direct solvers, starting with several algorithms for reordering the rows and columns of A prior to factoring it. We present data comparing the results of decomposing matrices with a nonsymmetric permutation to results from using a symmetric permutation. For one matrix, the largest block found with a nonsymmetric permutation is a tenth of the size of the largest block found with a symmetric permutation, which can greatly reduce the subsequent factorization time.

Focusing on a specific algorithm for reordering A to reduce fill, we then describe our design and implementation of a threaded column approximate minimum degree algorithm.

Finally we turn to incomplete LU (ILU) factorizations, a family of preconditioners often used with iterative solvers. We propose a modification to a standard ILU scheme and show that it makes better use of the memory the user has available, leading to a greater likelihood of convergence for preconditioned GMRES(50), the iterative solver used in our studies. (Abstract shortened by UMI.)
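To make the nonsymmetric-permutation idea concrete, here is a toy sketch: find a column permutation that puts nonzeros on the diagonal via augmenting-path bipartite matching, then measure diagonal-block sizes as strongly connected components of the matrix's directed graph. The function names and the tiny test matrix are my own illustration, not the dissertation's algorithm or data.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def zero_free_column_perm(A):
    """Column permutation perm such that A[:, perm] has a zero-free diagonal,
    found by augmenting-path bipartite matching of rows to columns.
    A toy sketch of nonsymmetric reordering, for illustration only."""
    A = csr_matrix(A)
    n = A.shape[0]
    match_col = [-1] * n            # match_col[j]: row matched to column j

    def augment(i, seen):
        for j in A.indices[A.indptr[i]:A.indptr[i + 1]]:
            if not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    for i in range(n):
        if not augment(i, [False] * n):
            raise ValueError("structurally singular matrix")
    perm = np.empty(n, dtype=int)
    for j in range(n):
        perm[match_col[j]] = j      # place column j at matched row's position
    return perm

def largest_diagonal_block(A):
    """Largest diagonal block after symmetric reordering = largest strongly
    connected component of the directed graph of A."""
    _, labels = connected_components(csr_matrix(A), directed=True,
                                     connection='strong')
    return int(np.bincount(labels).max())
```

For a 3 × 3 cyclic matrix the directed graph is one strong component of size 3, but after the nonsymmetric column permutation the matrix becomes diagonal and the largest block shrinks to size 1, which is the kind of block-size reduction described above.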
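As a sketch of how an ILU preconditioner is paired with restarted GMRES(50), the following uses SciPy's threshold-based incomplete LU (`spilu`), not the dissertation's modified ILU scheme; the matrix, `drop_tol`, and `fill_factor` values are illustrative assumptions. The `fill_factor` parameter is what bounds the memory the incomplete factors may use.

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import spilu, LinearOperator, gmres

# Illustrative sketch of ILU-preconditioned GMRES(50) with SciPy's
# threshold ILU; parameter choices here are assumptions, not the
# dissertation's modified ILU scheme.
n = 200
A = (sprandom(n, n, density=0.05, random_state=7) + 10.0 * identity(n)).tocsc()
b = np.ones(n)

# drop_tol discards small entries; fill_factor caps the memory the
# incomplete factors may use relative to the storage of A
ilu = spilu(A, drop_tol=1e-4, fill_factor=10.0)
M = LinearOperator((n, n), matvec=ilu.solve)   # applies (LU)^{-1}

x, info = gmres(A, b, M=M, restart=50)         # info == 0 means converged
```

Trading a larger `fill_factor` for a smaller `drop_tol` (or vice versa) changes how the available memory is spent, which is the design axis the proposed ILU modification addresses.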
Keywords/Search Tags: Eigenvalues, Sparse, Matrices, Balancing