
Hierarchical reinforcement learning with function approximation for adaptive control

Posted on: 2005-12-24
Degree: Ph.D
Type: Dissertation
University: Case Western Reserve University
Candidate: Skelly, Margaret Mary
Full Text: PDF
GTID: 1458390008498155
Subject: Engineering
Abstract/Summary:
This dissertation investigates, through empirical studies, the incorporation of function approximation and hierarchy into reinforcement learning for use in an adaptive control setting.

Reinforcement learning is an artificial intelligence technique whereby an agent discovers which actions lead to optimal task performance through interaction with its environment. Although reinforcement learning is usually employed to find optimal solutions to problems in unchanging environments, a reinforcement learning agent can be modified to continually explore and adapt in a dynamic environment, carrying out a form of direct adaptive control. In the adaptive control setting, the reinforcement learning agent must be able to learn and adapt quickly enough to compensate for the dynamics of the environment. Since reinforcement learning is known to converge slowly to optimality even in stationary environments, function approximation and hierarchical task decomposition are examined as means to accelerate reinforcement learning. Various levels of generalization and abstraction are introduced into the agents through the use of function approximation and hierarchical task decomposition. The effectiveness of this approach is tested in simulations of representative reinforcement learning tasks. Comparing the agents' learning and adaptation provides insight into the suitability of these techniques for accelerating learning and adaptation.

The reinforcement learning agent uses function approximation to store its learned information. The function approximation method chosen provides local generalization, which allows a controlled diffusion of information throughout the task space. As a consequence, the experiments conducted with function approximation demonstrate that local generalization, governed by the amount of information diffusion, can accelerate learning in tasks where similar states call for similar actions.

Hierarchical task decomposition provides a means of representing a task as a set of related subtasks, which introduces a modularity into the task's representation that is not possible in a monolithic representation.
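The abstract does not name a specific algorithm or approximator. As a hedged illustration of the kind of agent it describes, the sketch below pairs standard Q-learning with a tile-coding-style approximator, whose overlapping tilings provide exactly the local generalization discussed above: nearby states share tiles, so an update to one state diffuses to its neighbors. The toy task, tiling layout, and hyperparameters are assumptions for illustration, not the dissertation's experimental setup.

```python
import random

# Illustrative sketch only: Q-learning with a tile-coding-style local
# function approximator on a toy 1-D task.  All task details and
# hyperparameters here are assumptions, not the dissertation's.

N_TILINGS = 4            # overlapping tilings -> local generalization
N_TILES = 10             # tiles per tiling over the state range [0, 1)
ACTIONS = (-0.05, 0.05)  # move left or move right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2

def features(state):
    """Active tile (one per tiling) for a state in [0, 1)."""
    active = []
    for t in range(N_TILINGS):
        offset = t / (N_TILINGS * N_TILES)  # each tiling is shifted slightly
        active.append((t, min(int((state + offset) * N_TILES), N_TILES - 1)))
    return active

weights = {}  # (tiling, tile, action) -> weight

def q_value(state, action):
    return sum(weights.get((t, tile, action), 0.0)
               for t, tile in features(state))

def update(state, action, target):
    """Move the active tiles' weights toward the TD target.  Because
    nearby states share tiles, the update generalizes locally."""
    err = target - q_value(state, action)
    for t, tile in features(state):
        key = (t, tile, action)
        weights[key] = weights.get(key, 0.0) + (ALPHA / N_TILINGS) * err

def step(state, action):
    """Toy task: reward 1 for reaching the region above 0.9."""
    s2 = min(max(state + action + random.gauss(0.0, 0.01), 0.0), 0.999)
    done = s2 > 0.9
    return s2, (1.0 if done else 0.0), done

def run_episode():
    s = random.random()
    for _ in range(200):
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q_value(s, x))
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(q_value(s2, x) for x in ACTIONS)
        update(s, a, target)
        s = s2
        if done:
            return

random.seed(0)
for _ in range(300):
    run_episode()
```

Because adjacent states activate overlapping tiles, experience gathered in one part of the state space immediately shapes the value estimates of its neighbors, which is the "controlled diffusion of information" the abstract credits with accelerating learning when similar states call for similar actions.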
One effect of the hierarchy's modularity is to contain certain environment changes within the smaller space of a subtask. Therefore, the experiments comparing hierarchical and monolithic representations of a task demonstrate that the hierarchical representation can accelerate adaptation in response to certain isolated environment changes.
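As a schematic sketch of this containment effect (an illustrative assumption, not the dissertation's experimental setup), the following compares how many stored value entries a hierarchical and a monolithic representation would need to revisit after an environment change confined to one subtask's region:

```python
# Schematic only: a 10x10 grid task decomposed into two "room" subtasks.
# Each subtask keeps its own value table; a monolithic agent keeps one
# table over the whole grid.  The decomposition and counts are
# illustrative assumptions, not the dissertation's experiments.

GRID = [(x, y) for x in range(10) for y in range(10)]
room_a = [s for s in GRID if s[0] < 5]   # left half of the grid
room_b = [s for s in GRID if s[0] >= 5]  # right half of the grid

hier_tables = {"room_a": {s: 0.0 for s in room_a},
               "room_b": {s: 0.0 for s in room_b}}
mono_table = {s: 0.0 for s in GRID}

def entries_to_relearn(change_region, representation):
    """Count stored entries an agent must revisit after an environment
    change confined to `change_region`."""
    if representation == "hierarchical":
        # Only subtasks whose state space overlaps the change are affected;
        # the modularity contains the change within those subtasks.
        return sum(len(tbl) for tbl in hier_tables.values()
                   if any(s in tbl for s in change_region))
    # A monolithic table has no such containment: in the worst case a
    # local change perturbs the optimal value of every state.
    return len(mono_table)

change = [(7, 3)]  # a change confined to room B
print(entries_to_relearn(change, "hierarchical"))  # 50 (room B's table only)
print(entries_to_relearn(change, "monolithic"))    # 100 (the whole grid)
```

The hierarchical agent revisits half as many entries here because the change falls entirely inside one subtask, mirroring the abstract's claim that modularity can accelerate adaptation to isolated environment changes.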
Keywords/Search Tags: Reinforcement learning, Function approximation, Adaptive control, Hierarchical, Task, Environment, Accelerate