
Theories And Applications Of MCA Neural Networks

Posted on: 2007-02-05
Degree: Doctor
Type: Dissertation
Country: China
Candidate: D Z Peng
Full Text: PDF
GTID: 1118360212475535
Subject: Computer system architecture
Abstract/Summary:
Extraction of minor components plays an important role in beamforming, frequency estimation, and curve/surface fitting. As an important statistical tool, minor component analysis (MCA) has been widely applied in signal processing and data analysis. Neural networks can adaptively extract minor components from high-dimensional input signals and, compared with traditional matrix algebraic approaches, have lower computational complexity.

The convergence of MCA neural networks is essential to practical applications, and their dynamical behavior has attracted worldwide attention in recent years. Many convergence results for MCA neural networks were derived via the traditional deterministic continuous time (DCT) method. However, the DCT method requires many restrictive conditions that are usually not satisfied in practical applications. Recently, a deterministic discrete time (DDT) method has been proposed to analyze the dynamics of feedforward neural networks. The DDT method does not require the restrictive conditions of the DCT method and is therefore a more reasonable tool for analysis. This thesis focuses mainly on the convergence analysis of MCA neural networks via the DDT method. In addition, the convergence speeds of MCA learning algorithms and modifications to some existing MCA learning algorithms are discussed in detail. The main contributions are as follows:

1. Convergence analysis of MCA learning algorithms with constant learning rates. According to stochastic approximation theory, when the DCT method is used to analyze the convergence of MCA learning algorithms, the learning rate is required to approach zero. However, the learning rate is usually taken to be a constant in practical applications. The DDT method, on the other hand, allows the learning rate to be a constant, which makes it a more reasonable method for convergence analysis. In this thesis, the dynamics of some important MCA learning algorithms with constant learning rates are analyzed via the DDT method, and sufficient conditions are obtained to guarantee their convergence.

2. Analysis of the convergence speeds of MCA learning algorithms. Fast convergence of MCA learning algorithms is important in practical applications. This thesis identifies the factors that affect the convergence speeds of MCA learning algorithms and compares the convergence speeds of different algorithms. Guidelines for selecting initial weight vectors are provided to speed up convergence.

3. Modifications to some existing MCA learning algorithms. Some existing MCA learning algorithms suffer from a norm divergence problem. By introducing a variable learning rate and a normalization step, this thesis proposes modifications to existing MCA learning algorithms that guarantee the weight vector norm converges stably to a constant.

4. A generalized MCA learning algorithm. A generalized MCA learning algorithm is proposed and analyzed; many other MCA learning algorithms can be regarded as instances of the generalized one.

5. A sequential MCA learning algorithm. In some practical applications, it is necessary to extract multiple minor components. This thesis proposes a sequential MCA learning algorithm that extracts multiple minor components from the input signals. It is proven via the DDT method that, if the learning rate satisfies some mild conditions, the proposed sequential algorithm is globally convergent.
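To make the setting concrete, the following is a minimal NumPy sketch of the kind of MCA learning rule discussed above. The abstract does not give the exact update equations studied in the thesis; the sketch assumes a generic Oja-type anti-Hebbian update with a constant learning rate eta (cf. contribution 1) and an explicit normalization step that keeps the weight norm bounded (cf. contribution 3). All variable names are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean input signals with a known covariance structure.
n, dim = 5000, 4
A = rng.standard_normal((dim, dim))
C = A @ A.T                               # true covariance of the inputs
X = rng.multivariate_normal(np.zeros(dim), C, size=n)

eta = 0.01                                # constant learning rate (DDT setting)
w = rng.standard_normal(dim)
w /= np.linalg.norm(w)                    # unit-norm initial weight vector

for x in X:
    y = w @ x                             # neuron output
    w = w - eta * (y * x - (y * y) * w)   # anti-Hebbian MCA update
    w /= np.linalg.norm(w)                # normalization step: keeps ||w|| = 1

# The minor component is the eigenvector of the smallest eigenvalue
# of the input covariance matrix (np.linalg.eigh sorts ascending).
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
v_min = eigvecs[:, 0]
print("alignment |cos| with true minor component:", abs(w @ v_min))

After training, w should align (up to sign) with the eigenvector associated with the smallest eigenvalue of the input covariance matrix, i.e., the minor component; on the unit sphere this update is a stochastic descent of the Rayleigh quotient.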
Keywords/Search Tags: Neural Networks, Minor Component Analysis, Eigenvector, Eigenvalue, Deterministic Discrete Time System