The singular value decomposition (SVD) of a matrix and the generalized SVD (GSVD) of a matrix pair are standard decompositions in numerical linear algebra and have been used extensively in scientific and engineering computing. Computing them effectively and efficiently is highly challenging. For the computation of the SVD, we propose harmonic and refined harmonic extraction versions of the Jacobi-Davidson SVD (JDSVD) type method for computing one or more interior singular triplets of a large matrix. At each outer iteration of these methods, a correction equation, i.e., an inner linear system, is solved approximately by an iterative method. The accuracy of the inner iterations critically affects the convergence and overall efficiency of the JDSVD methods. In this thesis, we make a convergence analysis of these two JDSVD methods and prove that if all the correction equations are solved with low or modest accuracy during the outer iterations, then each JDSVD method converges as if all the correction equations had been solved exactly. Based on this theory, we propose practical stopping criteria for the inner iterations involved in the two methods. Numerical experiments confirm the theory and the effectiveness of the resulting inexact algorithms.

For the computation of the GSVD of a large matrix pair whose two matrices both have full column rank, the GSVD is commonly formulated as two mathematically equivalent generalized eigenvalue problems, so that a generalized eigensolver can be applied to one of them and the desired GSVD components are then recovered from the computed generalized eigenpairs. Our concern in this thesis is which of the two formulations is numerically preferable in finite precision arithmetic, i.e., which one computes the desired GSVD components more accurately. We make a detailed perturbation analysis of the two formulations and show how to make a suitable choice between them. Numerical experiments illustrate the results obtained.

For the computation of a
partial GSVD of a large regular matrix pair, we propose a Cross-Product Free (CPF) Jacobi-Davidson (JD) type method, referred to as the CPF-JDGSVD method. By computing the thin QR factorizations of two certain matrices, the method implicitly solves the mathematically equivalent generalized eigenvalue problem of a certain cross-product matrix pair, but avoids the possible accuracy loss in the computed generalized singular values and generalized singular vectors by never explicitly forming the cross-product matrices. At each step, the search subspaces are expanded by approximately solving a correction equation iteratively, which constitutes the inner iterations; the extraction of approximate GSVD components with respect to the given search subspaces constitutes the outer iterations. We make a convergence analysis of both the inner and outer iterations, and based on the results we propose practical stopping criteria for the inner iterations. Numerical experiments illustrate the effectiveness of the CPF-JDGSVD algorithm.
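As background for the JDSVD-type methods above: singular triplets of a matrix correspond to eigenpairs of a symmetric augmented (cyclic) matrix, one standard eigenproblem formulation for which Jacobi-Davidson-type methods build search subspaces. A minimal NumPy sketch of this correspondence (the matrix and its dimensions are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 5
A = rng.standard_normal((m, n))  # illustrative dense test matrix

# Symmetric augmented (cyclic) matrix: its eigenvalues are +/- the
# singular values of A together with m - n zeros, and its eigenvectors
# carry the corresponding left and right singular vectors.
M = np.block([[np.zeros((m, m)), A],
              [A.T, np.zeros((n, n))]])

eigs = np.linalg.eigvalsh(M)
svals = np.linalg.svd(A, compute_uv=False)

# The positive eigenvalues of M reproduce the singular values of A.
pos = np.sort(eigs[eigs > 1e-8])[::-1]
print(np.allclose(pos, svals))
```

Interior singular triplets correspond to interior eigenvalues of this augmented matrix, which is precisely the situation where harmonic and refined harmonic extractions are preferable to standard Rayleigh-Ritz extraction.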
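The generalized eigenvalue formulations of the GSVD discussed above can be sketched as follows. This is an illustrative example only: the pair is denoted (A, B) here purely for concreteness (the abstract does not name the matrices), with B of full column rank, and one common formulation takes the eigenvalues of the pair of cross-product matrices, which are the squared generalized singular values:

```python
import numpy as np

rng = np.random.default_rng(2)
m, p, n = 10, 7, 5
A = rng.standard_normal((m, n))  # illustrative pair (A, B);
B = rng.standard_normal((p, n))  # B has full column rank

# One formulation: A^T A x = lambda * B^T B x, whose eigenvalues lambda
# are the squared generalized singular values of (A, B).  Solved here by
# a dense eigensolver purely for illustration.
lam = np.linalg.eigvals(np.linalg.inv(B.T @ B) @ (A.T @ A))
gsv = np.sqrt(np.sort(lam.real)[::-1])

# Cross-check: with the thin QR factorization B = Q R, the generalized
# singular values of (A, B) equal the singular values of A R^{-1}.
Q, R = np.linalg.qr(B)
ref = np.linalg.svd(A @ np.linalg.inv(R), compute_uv=False)
print(np.allclose(gsv, ref))
```

The reciprocal pair gives the mathematically equivalent second formulation; which of the two behaves better in finite precision is exactly the question the perturbation analysis in the thesis addresses.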
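The accuracy loss from explicitly formed cross-product matrices, which the CPF-JDGSVD method is designed to avoid, can be reproduced in a small experiment (an illustrative sketch, not from the thesis): recovering small singular values from the eigenvalues of the explicitly formed cross-product matrix squares the condition number and roughly halves the attainable accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Build a test matrix with prescribed singular values spanning
# ten orders of magnitude.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.logspace(0, -10, n)
A = U @ np.diag(sigma) @ V.T

# Singular values computed directly from A.
s_direct = np.linalg.svd(A, compute_uv=False)

# Singular values recovered from the eigenvalues of the explicitly
# formed cross-product matrix A^T A (clipped, since rounding can make
# tiny eigenvalues negative).
eigvals = np.linalg.eigvalsh(A.T @ A)[::-1]
s_cross = np.sqrt(np.clip(eigvals, 0.0, None))

# The smallest singular value loses accuracy via the cross product.
err_direct = abs(s_direct[-1] - sigma[-1]) / sigma[-1]
err_cross = abs(s_cross[-1] - sigma[-1]) / sigma[-1]
print(f"relative error, direct SVD:    {err_direct:.1e}")
print(f"relative error, cross-product: {err_cross:.1e}")
```

The cross-product route loses the smallest singular values entirely once their squares fall below the unit roundoff times the norm of the cross-product matrix, which is why a cross-product-free method matters for small generalized singular values.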