
Hierarchical reinforcement learning and social cognition in cooperative multi-robot foraging

Posted on: 2012-02-25  Degree: Ph.D.  Type: Thesis
University: Dartmouth College  Candidate: Sun, Xueqing  Full Text: PDF
GTID: 2458390011452215  Subject: Engineering
Abstract/Summary:
The study of cooperative multi-agent systems is an active research area that addresses how to coordinate individual agents so that, as a team, they achieve an optimal behavioral strategy for task completion. Because of the interactions among agents, the main challenge of multi-agent cooperation lies in the computational complexity, which grows with the number of agents and with the size of the task space (world states and action choices). Across a variety of application domains, reinforcement learning (RL), in which an agent learns from reward or punishment delivered by the environment, has become one of the important learning techniques in artificial intelligence.

In this dissertation, a biologically inspired, socially augmented reinforcement learning approach to multi-agent cooperation is presented that achieves both heterogeneous role emergence and task learning in a unified framework. The methodology combines state abstraction in a neural perception module, hierarchical Q-learning, and the incorporation of social constructs to reduce complexity and improve learning efficiency. The problems addressed in this thesis can be described mathematically as variants of the Markov decision process (MDP), whose complexity is proven to range from P-complete for the centralized MDP up to non-deterministic exponential time (NEXP) for the decentralized partially observable MDP (Dec-POMDP). However, there is ample evidence in the natural world that high-functioning mammals learn to solve complex problems with ease, both individually and cooperatively.
This ability to solve computationally intractable problems stems both from brain circuits that support hierarchical representation of state spaces, action spaces, and learned policies, and from constraints imposed by social cognition.

The primary contributions of this dissertation are twofold. First, learning in foraging tasks is explored through state- and action-space abstraction and hierarchical representation, which reduce complexity and make the problem domain more tractable. Second, social knowledge modeled after the social behavior observed in intelligent mammals increases the efficiency of the policy search by exploiting roles, relationships, and dominance hierarchies. Taken together, these concepts provide satisficing solutions to otherwise intractable agent-based learning problems. Analytical results bound the reduction in computational complexity, and extensive simulation results show that the theoretical bounds hold. A robotic demonstration validates the learning framework in a realistic environment.
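The abstract names tabular Q-learning over abstracted states as the core complexity-reduction machinery. The dissertation's actual neural perception module and role hierarchy are not reproduced here, but a minimal sketch of the two underlying ideas might look as follows (the names `abstract_state`, `q_update`, the grid size, and the forager action set are illustrative assumptions, not the thesis's own API):

```python
def abstract_state(x, y, cell=4):
    """Coarsen a fine-grained grid position into a region index.
    Abstraction like this is what keeps the learner's Q-table small."""
    return (x // cell, y // cell)

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: nudge Q(s, a) toward the observed
    reward plus the discounted value of the best next action."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]

# A 16x16 grid has 256 raw positions but only 16 abstract regions,
# so a Q-table over (region, action) pairs is 16x smaller.
regions = {abstract_state(x, y) for x in range(16) for y in range(16)}

# One update for a hypothetical forager that grabs food in region (0, 0):
ACTIONS = ["search", "grab", "return_home"]
Q = {}
q_update(Q, abstract_state(2, 3), "grab", 1.0, abstract_state(3, 3), ACTIONS)
```

Stacking such learners per level of a task hierarchy (one table for choosing sub-tasks, one per sub-task for choosing primitive actions) is the standard way hierarchical Q-learning trades a single huge table for several small ones.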
Keywords/Search Tags: Reinforcement learning, Social, Hierarchical