
Numerical Analysis For Two Classes Of Matrix Optimization Problems

Posted on: 2009-10-24
Degree: Doctor
Type: Dissertation
Country: China
Candidate: W G Wang
Full Text: PDF
GTID: 1100360245988177
Subject: Detection and processing of marine information
Abstract/Summary:
In this thesis, numerical analysis for two classes of matrix optimization problems arising in data processing is investigated. Many problems in the applied sciences can be cast in the framework of a regression problem. Ordinary regression models are usually optimization problems; multivariate regressions are often constrained least squares problems; and canonical correlation analysis (CCA) and its generalizations are often orthogonality-constrained optimization problems.

In Chapter 1, canonical correlation analysis (CCA) and its generalizations are introduced, and partial least squares (PLS) and parallel factor analysis (PARAFAC) are considered. An overview of our results is given.

Chapter 2 deals with perturbation analysis of matrix decompositions. Matrix decompositions play an important role in solving linear equations, eigenvalue problems, and linear least squares problems. To judge whether a computed solution is good, both the backward error and the condition number should be considered: the backward error reflects the stability of the algorithm, while the condition number measures the effect on the solution of perturbations in the input data. The relationship can be rephrased, roughly, as

forward error ≲ condition number × backward error.

Normwise and componentwise condition numbers of matrix decompositions have been investigated by many authors; Gohberg defined the mixed and componentwise condition numbers. In this chapter, using a new unified approach (i.e., choosing appropriate columns of two Kronecker products), the first-order normwise and componentwise perturbation expansions for the LU, Cholesky, and QR decompositions are derived. Mixed and componentwise condition numbers are defined, and their explicit expressions are derived for these matrix decompositions, which partly improve known results.

Chapter 3 concerns the partial least squares (PLS) method.
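Before turning to PLS, the backward-error/condition-number rule of thumb from Chapter 2 can be illustrated with a minimal numerical sketch. This uses only standard textbook formulas (the Rigal-Gaches normwise backward error for a linear system and the normwise condition number), not the thesis's own perturbation expressions:

```python
import numpy as np

# Illustrative sketch only: solve Ax = b, then compare the forward error
# against (condition number) * (normwise backward error).
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

x = np.linalg.solve(A, b)          # computed solution

r = b - A @ x                      # residual of the computed solution
# Rigal-Gaches normwise backward error (2-norm):
backward = np.linalg.norm(r) / (
    np.linalg.norm(A, 2) * np.linalg.norm(x) + np.linalg.norm(b)
)
kappa = np.linalg.cond(A)          # normwise condition number of A
forward = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

print(backward, kappa, forward)
```

A tiny backward error (near machine precision) together with a moderate condition number yields a small forward error, in line with the rule of thumb above.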
PLS is gaining increasing interest because it can mitigate collinearity among the variables. Eldén exhibited the mathematical equivalence between PLS and Lanczos bidiagonalization (LBD), so a PLS method based on LBD may be unstable. Some new properties and a new stable algorithm are obtained in this chapter. Moreover, updating and downdating algorithms for LBD are given.

In Chapter 4 we consider the polar decomposition and the generalized polar decomposition, which are fundamental tools in data processing. The chapter consists of two parts.

First, the polar decomposition and the generalized polar decomposition are studied. The approximation theorem proved by Sun and Chen is extended from the Frobenius norm to any unitarily invariant norm. A new explicit representation of the subunitary polar factor is obtained; using this new expression, we derive a perturbation bound for the subunitary polar factor in any unitarily invariant norm. Finally, numerical computation is discussed.

The second part is a continuation and improvement of results by Laszkiewicz and Ziętak (BIT, 46 (2006), pp. 345-366) on perturbation analysis of the polar decomposition. Some basic properties of best-approximation subunitary matrices are investigated in detail, and perturbation bounds for the polar factor are derived.

In Chapter 5 we consider an application of PARAFAC to the study of DNA binding behavior. PARAFAC is a convincing method for studying the interaction of complexes with DNA. Using PARAFAC, the equilibrium spectra and concentrations of EB-DNA and EB are obtained directly.
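For the square nonsingular case, the polar decomposition discussed in Chapter 4 can be sketched via the SVD. This is the standard textbook construction, not necessarily the algorithm analyzed in the thesis:

```python
import numpy as np

# Sketch: polar decomposition A = Q H via the SVD A = U diag(s) V^T,
# giving Q = U V^T (orthogonal factor) and H = V diag(s) V^T
# (symmetric positive semidefinite factor).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

U, s, Vh = np.linalg.svd(A)
Q = U @ Vh                        # orthogonal polar factor
H = Vh.T @ np.diag(s) @ Vh        # symmetric positive semidefinite factor

# Q is also the nearest orthogonal matrix to A in any unitarily
# invariant norm, which underlies the best-approximation results
# studied in Chapter 4.
print(np.linalg.norm(A - Q @ H))
```

For rank-deficient or rectangular matrices the orthogonal factor is replaced by a subunitary one, which is the setting of the perturbation bounds in this chapter.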
Keywords/Search Tags: Multivariate regression, canonical correlation analysis (CCA), partial least squares (PLS), parallel factor analysis (PARAFAC), matrix decomposition, polar decomposition, generalized polar decomposition