
Learning in chaotic recurrent neural networks

Posted on: 2010-05-10
Degree: Ph.D
Type: Dissertation
University: Columbia University
Candidate: Sussillo, David C
GTID: 1448390002976953
Subject: Biology
Abstract/Summary:
Training recurrent neural networks (RNNs) is a long-standing open problem in both theoretical neuroscience and machine learning. In particular, training chaotic RNNs was previously thought to be impossible. While some traditional methods for training RNNs exist, they are generally weak and typically fail on anything but the simplest problems and smallest networks. We review previous methods, such as gradient-descent approaches, and their problems, as well as more recent approaches such as the Echo State Network and related ideas. We show that chaotic RNNs can be trained to generate multiple patterns, and we explain a novel supervised learning paradigm, which we call FORCE learning, that accomplishes the training. The network architectures we analyze range from, at one extreme, training only the input weights to a readout unit that has strong feedback to the network, to, at the other extreme, generic learning of all synapses within the RNN. We present these models as potential networks for motor pattern generation that can learn multiple, high-dimensional patterns while coping with the complexities of a recurrent network that may have spontaneous, ongoing, and complex dynamics. As an example, we show a single RNN that can generate the aperiodic dynamics of all 95 joint angles for both human walking and running motions recorded with motion-capture technology. Finally, we apply the learning techniques developed for chaotic RNNs to a novel unsupervised method for extracting predictable signals from high-dimensional time-series data, if such predictable signals exist.
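The readout-with-feedback architecture described above can be sketched in code. The following is a minimal illustration of FORCE-style training, in which a recursive-least-squares (RLS) rule updates only the readout weights of a chaotic rate network while the readout is fed back into the network; all parameter values (network size, gain, target signal) are illustrative choices, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the dissertation's values)
N = 300          # recurrent units
g = 1.5          # gain > 1 places the untrained network in the chaotic regime
dt = 0.1         # integration step relative to the unit time constant
alpha = 1.0      # RLS regularization; P starts at I / alpha

# Random recurrent weights, trainable readout, fixed feedback
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
w = np.zeros(N)                  # readout weights: the only trained parameters here
wf = 2.0 * rng.random(N) - 1.0   # fixed feedback from the readout into the network
P = np.eye(N) / alpha            # running estimate of the inverse correlation matrix

# Target: a simple periodic pattern to learn
T = 2000
t = np.arange(T) * dt
f = np.sin(2.0 * np.pi * t / 10.0)

x = 0.5 * rng.standard_normal(N)  # network state
r = np.tanh(x)                    # firing rates
z = 0.0                           # readout

for i in range(T):
    # Leaky firing-rate dynamics driven by recurrence plus readout feedback
    x = (1.0 - dt) * x + dt * (J @ r + wf * z)
    r = np.tanh(x)
    z = w @ r

    # RLS step: the error e = z - f is suppressed rapidly, which is the
    # essence of FORCE learning (keep the error small from the start)
    e = z - f[i]
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= e * k
```

Because the RLS update keeps the output clamped near the target throughout training, the feedback the network receives is always close to the signal it will see after learning, which is what lets this work even when the untrained dynamics are chaotic.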
Keywords/Search Tags: Network, Recurrent, Chaotic, RNNs, Training