
Maximum likelihood inverse reinforcement learning

Posted on: 2015-07-15
Degree: Ph.D.
Type: Dissertation
University: Rutgers The State University of New Jersey - New Brunswick
Candidate: Vroman, Monica C.
Full Text: PDF
GTID: 1470390020452379
Subject: Computer Science
Abstract/Summary:
Learning desirable behavior from a limited number of demonstrations, also known as inverse reinforcement learning (IRL), is a challenging task in machine learning. I apply maximum likelihood estimation to the problem of inverse reinforcement learning and show that it quickly and successfully identifies the unknown reward function from traces of optimal or near-optimal behavior, under the assumption that the reward function is a linear function of a known set of features. I extend this approach to cover reward functions that are a generalized function of the features, and show that the generalized inverse reinforcement learning approach is a competitive alternative to existing approaches covering the same class of functions, while also learning the right rewards in cases that previous methods could not handle.

I then apply these tools to the problem of learning from unlabeled demonstration trajectories generated by varying "intentions" or objectives. I derive an EM approach that clusters observed trajectories by inferring the objectives for each cluster using any of several possible IRL methods, and then uses the constructed clusters to quickly identify the intent of a new trajectory.

Finally, I present an application of maximum likelihood inverse reinforcement learning to training an artificial agent to follow verbal instructions representing high-level tasks, using a set of instructions paired with demonstration traces of appropriate behavior.
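The first contribution can be illustrated with a minimal sketch: under a Boltzmann (softmax) behavior model, the reward weights are fit by gradient ascent on the log-likelihood of the demonstrated state-action pairs. Everything below is an illustrative assumption, not the thesis's actual setup: a toy 4-state chain MDP, one-hot state features, and a central-difference gradient in place of the analytic gradient the thesis derives.

```python
import numpy as np

# Hypothetical toy MDP: 4-state chain, action 0 moves left, action 1 moves right.
n_states, n_actions, gamma, beta = 4, 2, 0.9, 5.0

def step(s, a):
    """Deterministic transitions on the chain."""
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

phi = np.eye(n_states)  # one-hot state features, so r(s) = w[s]

def boltzmann_policy(w, iters=60):
    """Soft value iteration under r = phi @ w; returns pi[s, a]."""
    r = phi @ w
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.array([[r[s] + gamma * V[step(s, a)] for a in range(n_actions)]
                      for s in range(n_states)])
        X = beta * Q
        m = X.max(axis=1)
        V = (m + np.log(np.exp(X - m[:, None]).sum(axis=1))) / beta  # stable log-sum-exp
    E = np.exp(X - m[:, None])
    return E / E.sum(axis=1, keepdims=True)

def log_likelihood(w, demos):
    pi = boltzmann_policy(w)
    return sum(np.log(pi[s, a]) for s, a in demos)

# Demonstration trace of near-optimal behavior: always move right, toward state 3.
demos = [(0, 1), (1, 1), (2, 1)]

# Gradient ascent on the demonstration log-likelihood; a central-difference
# gradient is used here for brevity.
w, eps, lr = np.zeros(n_states), 1e-4, 0.2
for _ in range(150):
    grad = np.array([(log_likelihood(w + eps * e, demos)
                      - log_likelihood(w - eps * e, demos)) / (2 * eps)
                     for e in np.eye(n_states)])
    w += lr * grad

# The learned reward should rank the demonstrated goal end of the chain
# above the start state.
print(w[3] > w[0])
```

The log-sum-exp backup plays the role of a soft Bellman update, so the induced policy is differentiable in `w`, which is what makes likelihood-based fitting possible.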
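The EM clustering idea can also be sketched compactly. In this hedged toy version, each cluster's per-cluster IRL step is replaced by a simple per-state action-frequency model so the example stays short; the thesis instead refits a reward via an IRL method for each cluster. All data and parameters below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, K = 4, 2, 2

# Synthetic unlabeled demos from two intents: intent A mostly picks action 1,
# intent B mostly picks action 0.
def sample_traj(p_right):
    return [(s, int(rng.random() < p_right)) for s in rng.integers(0, n_states, 10)]

trajs = [sample_traj(0.95) for _ in range(10)] + [sample_traj(0.05) for _ in range(10)]

# Random cluster behavior models and uniform mixing weights.
pi = rng.uniform(0.3, 0.7, size=(K, n_states, n_actions))
pi /= pi.sum(axis=2, keepdims=True)
rho = np.full(K, 1.0 / K)

for _ in range(30):
    # E-step: responsibility of each cluster for each trajectory.
    logp = np.array([[np.log(rho[k]) + sum(np.log(pi[k, s, a]) for s, a in t)
                      for k in range(K)] for t in trajs])
    z = np.exp(logp - logp.max(axis=1, keepdims=True))
    z /= z.sum(axis=1, keepdims=True)
    # M-step: refit each cluster's behavior model from responsibility-weighted
    # counts (a stand-in for running IRL per cluster).
    counts = np.full((K, n_states, n_actions), 1e-3)
    for i, t in enumerate(trajs):
        for s, a in t:
            counts[:, s, a] += z[i]
    pi = counts / counts.sum(axis=2, keepdims=True)
    rho = z.mean(axis=0)

# Trajectories generated by the two intents should land in different clusters.
labels = z.argmax(axis=1)
print(labels.tolist())
```

Once the clusters converge, a new trajectory's intent is identified the same way the E-step does it: score it under each cluster's model and take the most responsible cluster.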
Keywords/Search Tags: Inverse reinforcement learning, Maximum likelihood, Behavior