
Collective learning and cooperation between intelligent software agents: A study of artificial personality and behavior in autonomous agents playing the infinitely repeated prisoner's dilemma game

Posted on: 1998-03-22
Degree: D.Sc
Type: Dissertation
University: The George Washington University
Candidate: Shebalin, Paul Valentine
Full Text: PDF
GTID: 1468390014978723
Subject: Engineering
Abstract/Summary:
Competition is an inextricable part of our daily life. Why, then, does cooperation routinely occur without coercion? One answer is that we are playing a game called the Prisoner's Dilemma. The Prisoner's Dilemma game poses a paradox: the rational player will never choose to cooperate, yet when people play the game they frequently do cooperate. This was demonstrated explicitly by Rapoport's Prisoner's Dilemma study.

As part of his study, Rapoport developed a stochastic learning model to help explain the human behavior he observed in the Infinitely Repeated Prisoner's Dilemma (IRPD) game. That model has weaknesses, however, including its inability to adapt autonomously when the rules of the game change. This is unfortunate, because an adaptive IRPD-game learning model would benefit the study of cooperative behavior between autonomous agents such as people and organizations, as well as the design of intelligent software agents.

Bock's collective learning systems (CLS) theory provides an adaptive alternative to Rapoport's stochastic learning model. In this dissertation, we develop the personality-moderated collective learning (PMCL) model, build computer programs to simulate PMCL agents playing the IRPD game, and conduct Monte Carlo experiments. PMCL agents have temperament and attitude as personality factors. Our study confirms that collective learning allows diversity, consistency with Rapoport's study, and adaptability in intelligent software agents playing the IRPD game. Additionally, the study shows that small differences in PMCL agent personality can generate significant differences in IRPD-game behavior. The dissertation ends with a discussion of the applicability of PMCL for modeling human personality.

The importance of this study is that it presents a new way of modeling the problem of cooperation between autonomous agents. Significant contributions include: (1) extending Bock's CLS theory to the IRPD problem; (2) developing the PMCL model, a new formulation of stochastic learning that does not share the weaknesses of Rapoport's stochastic learning model; and (3) introducing the concepts of attitude and temperament to CLS theory. Intelligent software agents can be constructed with PMCL-based mechanisms and then organized into systems of adaptably cooperative intelligent software agents. Additionally, the effect of artificial personality on autonomous agent behavior can be computationally modeled and studied.
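The abstract does not specify the PMCL update rules, so the sketch below is only a rough illustration of the kind of simulation it describes: two agents with hypothetical "temperament" and "attitude" parameters play a repeated Prisoner's Dilemma under the standard payoff matrix, and a small Monte Carlo run reports the average mutual-cooperation rate. The Agent class, its learning rule, and all parameter values are assumptions for illustration, not the dissertation's model.

```python
import random

# Standard one-shot Prisoner's Dilemma payoffs (T > R > P > S):
# mutual cooperation -> 3 each; mutual defection -> 1 each;
# defector against cooperator -> 5 vs. 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

class Agent:
    """Toy personality-moderated learner (illustrative only).

    temperament: baseline inclination to cooperate (0..1)
    attitude:    rate at which the agent shifts toward reciprocating
                 the partner's last observed move (0..1)
    """
    def __init__(self, temperament, attitude):
        self.temperament = temperament
        self.attitude = attitude
        self.p_cooperate = temperament  # current cooperation probability

    def choose(self):
        return 'C' if random.random() < self.p_cooperate else 'D'

    def learn(self, partner_move):
        # Stochastic-learning-style update: shift the cooperation
        # probability toward 1 after partner cooperation, and back
        # toward a fraction of the temperament baseline otherwise.
        target = 1.0 if partner_move == 'C' else self.temperament * 0.5
        self.p_cooperate += self.attitude * (target - self.p_cooperate)

def play_repeated_game(a, b, rounds=200):
    coop_rounds = 0
    for _ in range(rounds):
        move_a, move_b = a.choose(), b.choose()
        coop_rounds += (move_a == 'C' and move_b == 'C')
        a.learn(move_b)
        b.learn(move_a)
    return coop_rounds / rounds

if __name__ == '__main__':
    # Monte Carlo over many agent pairs with slightly different personalities.
    random.seed(0)
    rates = []
    for _ in range(1000):
        a = Agent(temperament=random.uniform(0.4, 0.6), attitude=0.2)
        b = Agent(temperament=random.uniform(0.4, 0.6), attitude=0.2)
        rates.append(play_repeated_game(a, b))
    print('mean mutual-cooperation rate:', sum(rates) / len(rates))
```

Even in this toy setup, small differences in the temperament draw shift the long-run cooperation rate, which is the qualitative effect the abstract attributes to personality differences between PMCL agents.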
Keywords/Search Tags:Intelligent software agents, Prisoner's dilemma, Behavior, Personality, Collective learning, Autonomous, Cooperation, Model