
A computational neuroscience model with application to robot perceptual learning

Posted on: 2008-05-16
Degree: Ph.D.
Type: Dissertation
University: Vanderbilt University
Candidate: Tugcu, Mert
Full Text: PDF
GTID: 1448390005456829
Subject: Engineering
Abstract/Summary:
In robotics, one important objective is the ability to teach a robot new skills and have it reason about the task at hand without explicit programming. This idea is central to open-ended development, developmental robotics, and autonomous mental development. One approach is to have the robot learn from its own past experience, which helps it adapt to changing environments. However, a critical issue in any learning process, for robots and biological creatures alike, is efficient use of the limited resources available for survival. A robot operating in a complex, unstructured environment will encounter many percepts, most of which are typically irrelevant to the current task. This suggests the need for an ability to focus attention on the small number of items that are relevant. Models of prefrontal cortex working memory may therefore be a good fit for learning to associate perception with action, and perhaps with other concepts as well, in order to perform a task.

Many systems in the literature have only crude perceptual capabilities; as a result, their environments are usually heavily simplified, for example by using artificial percepts. Such systems may fail in complex, uncontrolled environments, especially under changing lighting conditions. Successful task execution in these environments therefore depends strongly on a reliable perceptual system.

In this work, a novel perceptual system that operates on an extremely high-dimensional feature space is combined with a biologically inspired working memory model. The perceptual system does not rely on parametric techniques (e.g., computing eigenvectors or covariance matrices), and its computational cost does not depend strongly on the number of dimensions. Vision is the system's only sensory input. The resulting system first learns basic behaviors and skills, which are then used to learn more complex behaviors. The system's success is demonstrated on a vision-guided navigation task in a complex, noisy, and unmodified environment.
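The abstract does not give implementation details, so the sketch below is only a minimal illustration of the kind of non-parametric, memory-based perception it describes: feature vectors are stored as-is and new percepts are classified by nearest-neighbor vote, with a fixed random projection standing in for a dimension-insensitive comparison step. The class name, projection scheme, and all parameters are assumptions for illustration; this is not the dissertation's actual algorithm.

```python
# Illustrative sketch only (assumed design, not the dissertation's method):
# a non-parametric, memory-based classifier over very high-dimensional
# visual feature vectors. A fixed random projection compresses each vector
# once, so per-query cost depends on the projected size rather than the
# raw dimensionality, and no eigenvectors or covariance matrices are fit.
import numpy as np

class ProjectedKNN:
    def __init__(self, raw_dim, proj_dim=64, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random projection matrix; no model parameters are estimated.
        self.proj = rng.standard_normal((raw_dim, proj_dim)) / np.sqrt(proj_dim)
        self.k = k
        self.memory = []  # list of (projected feature, label) pairs

    def store(self, feature, label):
        """Memorize one percept: project and append -- no fitting step."""
        self.memory.append((feature @ self.proj, label))

    def classify(self, feature):
        """Label a new percept by majority vote among its k nearest stored percepts."""
        q = feature @ self.proj
        dists = [np.linalg.norm(q - f) for f, _ in self.memory]
        nearest = np.argsort(dists)[: self.k]
        labels = [self.memory[i][1] for i in nearest]
        return max(set(labels), key=labels.count)

# Hypothetical usage with 100,000-dimensional image features.
knn = ProjectedKNN(raw_dim=100_000)
knn.store(np.random.rand(100_000), "doorway")
knn.store(np.random.rand(100_000), "corridor")
print(knn.classify(np.random.rand(100_000)))
```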
Keywords/Search Tags: Robot, Perceptual, System, Task, Complex