
Neural Learning Algorithms For Principal And Minor Components Analysis And Applications

Posted on: 2001-11-27
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y S Ou
Full Text: PDF
GTID: 1118360002451299
Subject: Signal and Information Processing
Abstract/Summary:
Principal components are the directions in which the input data have the largest variances. Expressing data vectors in terms of the principal components is called Principal Component Analysis (PCA). On the other hand, the minor components are the directions in which the data have the smallest variances. Minor Component Analysis (MCA), the counterpart of PCA, and PCA itself are powerful statistical techniques for analyzing the covariance structure of a stochastic vector process. PCA and MCA have been widely used in many modern information processing fields, such as high-resolution spectrum estimation, system identification, data compression, feature extraction, pattern recognition, digital communication, computer vision, and so on. In this dissertation we investigate various neural network models and the corresponding learning algorithms for PCA and MCA. The emphasis is on extracting multiple eigen-components of a covariance matrix in parallel. The primary contributions and original ideas of this dissertation are summarized below.

We propose a learning algorithm for PCA based on least-squares minimization, referred to as the Robust Recursive Least Squares Algorithm (RRLSA). In contrast to previous linear PCA algorithms, the RRLSA does not introduce a back-propagation error to update the synaptic weight. It is closer to Hebb's rule, which updates the synaptic weight in proportion to the corresponding input-output product. In addition, the RRLSA introduces another multiplicative factor, also referred to in the literature as the leaky factor, to update the weight vector. The RRLSA may therefore be viewed as a leaky Hebb's rule with a simple form. We establish the relation between Oja's rule and the least-squares error criterion by using an optimal learning condition in Oja's rule. We introduce an unnormalized weight vector and show that all the information needed for PCA can be completely represented by it; hence, the unnormalized weight vector contains more information than the corresponding normalized version. Simulation results show that the RRLSA not only overcomes the slow convergence and possible misadjustment encountered with gradient-based Hebbian-type algorithms, but is also robust to the error accumulation present in sequential PCA algorithms.

By introducing a weighting matrix, we propose a Generalized Energy Function (GEF) for finding the optimum weights of a symmetrical linear neural network. From the GEF, several well-known learning algorithms for PCA or for principal subspace analysis (PSA), such as least mean square error reconstruction (LMSER) and projection approximation subspace tracking (PAST), can be derived. More importantly, we obtain a Recursive Least Squares (RLS) algorithm capable of extracting multiple principal components in parallel with the symmetrical linear network architecture. From the GEF we explain, in principle, how the weighting matrix breaks a symmetrical subspace rule into a PCA rule. Simulations show that the GEF algorithm is more robust to the eigenvalue spread of the covariance matrix than the well-known adaptive principal component extraction (APEX) algorithm.

We propose a Weighted Information Criterion (WINC) to search for the optimal weights of a linear neural network. We analytically show that the optimum weights globally asymptotically converge to the principal components of a covariance matrix...
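As a minimal numerical illustration of the PCA/MCA definitions given in the abstract (not code from the dissertation), the sketch below estimates a sample covariance matrix with NumPy and takes its largest- and smallest-eigenvalue eigenvectors as the principal and minor components; the data model, dimensions, and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_samples, dim = 2000, 4
A = rng.standard_normal((dim, dim))        # arbitrary mixing matrix (assumption)
x = rng.standard_normal((n_samples, dim)) @ A.T   # zero-mean samples, unequal variances

C = np.cov(x, rowvar=False)                # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)       # eigenvalues returned in ascending order

principal_component = eigvecs[:, -1]       # direction of largest variance (PCA)
minor_component = eigvecs[:, 0]            # direction of smallest variance (MCA)

print("variance along principal component:", eigvals[-1])
print("variance along minor component:   ", eigvals[0])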
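The RRLSA itself is not reproduced in this abstract. As a rough sketch of the "leaky Hebb's rule" idea it alludes to, the following code shows the classical single-unit Oja rule: the weight is updated by the input-output product y*x together with a multiplicative leak term that keeps its norm bounded, and it converges toward the dominant eigenvector of the data covariance. The learning rate, data model, and iteration count are assumptions, not values from the dissertation.

import numpy as np

def oja_update(w, x, lr=0.01):
    # One step of Oja's single-unit rule: Hebbian term y*x plus a
    # multiplicative "leaky" term -y**2 * w that bounds the weight norm.
    y = w @ x                              # scalar neuron output
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(1)
dim = 4
A = rng.standard_normal((dim, dim))        # data covariance will be A @ A.T (assumption)
w = rng.standard_normal(dim)
w /= np.linalg.norm(w)

for _ in range(5000):
    x = A @ rng.standard_normal(dim)       # zero-mean sample with covariance A @ A.T
    w = oja_update(w, x)

# w should now be close (up to sign) to the dominant eigenvector of A @ A.T.
top = np.linalg.eigh(A @ A.T)[1][:, -1]
print("alignment |cos(w, top eigenvector)|:", abs(w @ top) / np.linalg.norm(w))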
Keywords/Search Tags: Principal component analysis, minor component analysis, neural networks, generalized energy function, information criterion, weighted information criterion, parallel adaptive learning algorithm, stability and convergence analysis