
Persistency Of Excitation And Performance Analysis Of Deterministic Learning Algorithms

Posted on: 2013-08-20    Degree: Master    Type: Thesis
Country: China    Candidate: C Z Yuan    Full Text: PDF
GTID: 2248330374976334    Subject: Control theory and control engineering
Abstract/Summary:
Recently, by using localized radial basis function neural networks (RBFNNs), a deterministic learning theory was proposed for nonlinear system identification and dynamical pattern recognition. This thesis considers two issues within this theory: (i) evaluating the performance of deterministic learning algorithms for both continuous-time and discrete-time systems, including the effects of noise (both system noise and measurement noise) on the learning performance; and (ii) designing new learning algorithms for the identification of sampled-data nonlinear systems and for the recognition of temporal data sequences.

For the first issue, classical tools from system identification and adaptive control, such as "uniform complete observability" (UCO) and output-injection techniques, are used to obtain the explicit solution of a class of linear time-varying (LTV) systems and to establish the relationship between the persistent excitation (PE) level and the learning performance. On this basis, it is shown that: (i) the learning speed increases with the PE level; (ii) there exists an optimal learning speed; and (iii) the learning accuracy increases with the PE level; in particular, when the PE level is sufficiently large, locally accurate learning can be achieved to the desired accuracy, whereas a low PE level may degrade the learning performance. These results not only partially reveal the nature of deterministic learning, but also provide a systematic method for obtaining explicit bounds on the convergence rate and residual errors of a class of LTV systems that arises frequently in adaptive identification. Additionally, the effects of two noise sources that are essential in implementations, namely system noise and measurement noise, are investigated. It is shown that noise has little effect on the learning speed, whereas measurement noise degrades the learning accuracy more severely than system noise does. Moreover, it is shown that system noise can be used to improve the generalization ability of RBF networks. This study of noise provides guidance for the practical design of deterministic learning systems.

For the second issue, by introducing a state transformation and using classical tools for the stability analysis of discrete-time systems, a new neural network (NN) learning law for the deterministic learning of sampled-data nonlinear systems is proposed. This new learning law avoids the use of two system outputs, i.e., an a priori output and an a posteriori output, which are typically employed in much of the previous literature. This result also facilitates the stability analysis of the overall system and the performance analysis of the NN parameter convergence, i.e., the learning performance.

Beyond the theoretical analysis, a practical application example on the Moore-Greitzer model, a well-known axial flow compressor model, is included to illustrate the effectiveness of some of the proposed results.
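To fix ideas, the following is a standard formulation of the RBFNN identifier and the PE condition from the deterministic learning literature; the notation is illustrative and the thesis's exact formulation may differ. For an unknown plant $\dot{x} = f(x)$, a localized RBF network $\hat{W}^{T} S(x)$ approximates $f$ along the system trajectory via the identifier and learning law

\[
\dot{\hat{x}} = -a(\hat{x} - x) + \hat{W}^{T} S(x), \qquad
\dot{\hat{W}} = -\Gamma S(x)\,\tilde{x}^{T}, \qquad \tilde{x} = \hat{x} - x,
\]

where $S(x) = [s_1(x), \ldots, s_N(x)]^{T}$ collects Gaussian basis functions. Along a recurrent trajectory, the subvector $S_{\zeta}(x)$ of neurons located near the trajectory satisfies a (partial) PE condition: there exist $T, \alpha_1, \alpha_2 > 0$ such that

\[
\alpha_1 I \;\le\; \int_{t}^{t+T} S_{\zeta}(x(\tau))\, S_{\zeta}(x(\tau))^{T}\, d\tau \;\le\; \alpha_2 I
\qquad \text{for all } t \ge t_0 .
\]

The resulting state- and weight-error dynamics form exactly the class of LTV systems analyzed above, and the constants $\alpha_1, \alpha_2, T$ quantify the "PE level" that governs the convergence rate and residual error.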
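As a complementary illustration, here is a minimal numerical sketch of continuous-time deterministic learning, assuming a van der Pol oscillator as the recurrent plant; the plant, gains, and grid parameters are illustrative choices, not taken from the thesis.

import numpy as np

# Plant: van der Pol oscillator -- its limit cycle is a recurrent orbit,
# so the RBF regressor along it satisfies a (partial) PE condition.
def f(x):
    x1, x2 = x
    return np.array([x2, -x1 + (1.0 - x1**2) * x2])

# Gaussian RBF network: centers on a regular grid over the region
# visited by the orbit (grid extent and width eta are illustrative).
grid = np.linspace(-3.0, 3.0, 7)
centers = np.array([[c1, c2] for c1 in grid for c2 in grid])  # N x 2
eta = 1.0

def S(x):
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / eta**2)  # N-dimensional regressor

# Deterministic-learning identifier, forward-Euler discretization of
#   x_hat' = -a (x_hat - x) + W_hat^T S(x),  W_hat' = -Gamma S(x) (x_hat - x)^T
a, Gamma, dt, T = 5.0, 2.0, 1e-3, 200.0
x = np.array([1.0, 0.0])
x_hat = x.copy()
W_hat = np.zeros((len(centers), 2))

for k in range(int(T / dt)):
    s = S(x)
    e = x_hat - x                      # state estimation error
    x_hat = x_hat + dt * (-a * e + W_hat.T @ s)
    W_hat = W_hat + dt * (-Gamma * np.outer(s, e))
    x = x + dt * f(x)                  # plant step

# Along the orbit the learned network should reproduce f locally.
err = np.linalg.norm(W_hat.T @ S(x) - f(x))
print(f"pointwise approximation error on the orbit: {err:.3f}")

Because the orbit keeps revisiting the same region of state space, only the weights of neurons near the orbit converge; elsewhere the network remains untrained, which is precisely the "locally accurate" learning described above.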
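For the sampled-data part, the distinction between a priori and a posteriori outputs can be made concrete with a standard normalized-gradient law; this generic textbook law, shown here only for orientation, is not the new learning law proposed in the thesis (which additionally relies on a state transformation). For a sampled-data plant $x_{k+1} = F(x_k)$ approximated by $\hat{W}_k^{T} S(x_k)$, the a priori prediction error $e_{k+1} = \hat{W}_k^{T} S(x_k) - x_{k+1}$, computed with the current weights $\hat{W}_k$, drives the update

\[
\hat{W}_{k+1} \;=\; \hat{W}_k \;-\; \frac{\gamma\, S(x_k)\, e_{k+1}^{T}}{1 + S(x_k)^{T} S(x_k)},
\qquad 0 < \gamma < 2,
\]

whereas an a posteriori error would re-evaluate the prediction with the updated weights $\hat{W}_{k+1}$. A law that does not require both quantities simultaneously, as claimed for the one proposed in the thesis, removes the implicit dependence of the update on its own outcome and thereby simplifies the stability and convergence analysis.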
Keywords/Search Tags: Persistent excitation, deterministic learning, linear time-varying (LTV) systems, sampled-data systems, adaptive control, system identification, radial basis function (RBF) neural networks