In this thesis, we present a dynamic theory of learning, also known as online learning in computer science, in which the regression function is approximated stochastically within a reproducing kernel Hilbert space (RKHS). We establish probabilistic upper bounds showing that these online algorithms can achieve the same convergence rates as batch learning, and thus asymptotically attain rates that are optimal in certain senses.
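To make the idea concrete, the following is a minimal sketch of one standard instance of such a stochastic approximation: an online kernel least-mean-squares update in an RKHS. At each step the current estimate is nudged against the pointwise residual in the direction of the kernel section at the new sample. The Gaussian kernel, the step-size schedule, and all hyperparameter values here are illustrative assumptions, not the specific algorithm or rates analyzed in the thesis.

```python
import numpy as np


def gaussian_kernel(x, y, sigma=0.5):
    """Illustrative choice of reproducing kernel (Gaussian RBF)."""
    return np.exp(-((x - y) ** 2) / (2 * sigma ** 2))


class OnlineKernelRegression:
    """Online (stochastic-approximation) regression estimate in an RKHS.

    Update rule (kernel stochastic gradient descent on squared loss):
        f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .)
    so f_t stays a finite kernel expansion over the observed inputs.
    """

    def __init__(self, kernel=gaussian_kernel, eta0=0.5):
        self.kernel = kernel
        self.eta0 = eta0          # base step size (illustrative schedule)
        self.centers = []         # observed inputs x_1, ..., x_t
        self.alphas = []          # expansion coefficients of f_t

    def predict(self, x):
        """Evaluate the current estimate f_t(x)."""
        return sum(a * self.kernel(c, x)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, y, t):
        """One stochastic-approximation step on the sample (x, y)."""
        eta = self.eta0 / np.sqrt(t + 1)   # decaying step size
        residual = self.predict(x) - y
        self.centers.append(x)
        self.alphas.append(-eta * residual)
```

As a usage sketch, feeding i.i.d. samples (x_t, y_t) one at a time through `update` drives the mean squared error of `predict` down toward that of the regression function, which is the online analogue of the batch-learning rates discussed above.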