
Three essays on strategic behavior

Posted on: 2003-12-26
Degree: Ph.D
Type: Dissertation
University: The University of Arizona
Candidate: Swarthout, James Todd
Full Text: PDF
GTID: 1468390011490071
Subject: Economic theory
Abstract/Summary:
Each chapter of this dissertation focuses on a different aspect of strategic behavior. The first chapter presents research in which humans play against a computer decision maker that follows either a reinforcement learning algorithm or an Experience Weighted Attraction algorithm. The algorithms are more sensitive than humans to exploitable opponent play. Further, the learning algorithms respond systematically to calculated exploitation opportunities, but these responses are too weak in magnitude to improve the algorithms' payoffs. Humans also differ from current models of their behavior: rather than adjusting payoff assessments through smooth transition functions, humans who do detect exploitable play are more likely to choose the best response to that belief.

The second chapter reports research designed to directly reveal the information used by subjects in a game. Human play is often classified as adhering to either reinforcement learning or belief learning, a classification typically based on using subjects' observed action choices to estimate the learning models' parameters. We use a different, more direct approach: an experiment in which subjects choose which kind of information they see, either the information required for reinforcement learning or the information required for belief learning. Results suggest that while neither kind of information is chosen exclusively, subjects most often choose information that is consistent with belief learning and inconsistent with reinforcement learning.

The third chapter discusses the Groves-Ledyard mechanism. In economics we typically rely on continuous analysis; however, doing so may not accurately characterize a discrete environment. The Groves-Ledyard mechanism is a case in which continuous and discrete analyses yield drastically divergent results.
This chapter shows that, given quasi-linear preferences, a discrete strategy space need not yield a single Pareto optimal Nash equilibrium; it typically yields many Nash equilibria, not all of which are Pareto optimal. Further, the value of the mechanism's single free parameter determines both the number of Nash equilibria and the proportion of them that are Pareto optimal.
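The multiplicity result above can be illustrated by brute-force enumeration on a small grid. The sketch below is purely illustrative and is not the dissertation's experimental environment: the three-player setup, the quadratic valuations, the message grid, and the parameter values are all assumptions. It enumerates pure-strategy Nash equilibria of a Groves-Ledyard-style mechanism on a discrete message space and checks which of them are Pareto optimal.

```python
import itertools

# Hypothetical 3-player Groves-Ledyard-style setup on a discrete message grid.
# Valuations, grid, and parameter values are illustrative assumptions only.
N = 3
GRID = range(-2, 3)   # discrete strategy (message) space for each player
GAMMA = 1.0           # the mechanism's single free parameter
COST = 1.0            # constant marginal cost of the public good

def value(i, y):
    # quasi-linear valuation of the public good level y (assumed quadratic form)
    theta = [1.2, 1.0, 0.8][i]
    return theta * y - 0.1 * y * y

def payoff(i, m):
    # public good level is the sum of messages; each player pays an equal
    # cost share plus the Groves-Ledyard penalty for deviating from the
    # mean of the others' messages, less a term in the others' variance
    y = sum(m)
    others = [m[j] for j in range(N) if j != i]
    mu = sum(others) / (N - 1)
    var = sum((mj - mu) ** 2 for mj in others) / (N - 2)
    tax = COST * y / N + (GAMMA / 2) * ((N - 1) / N * (m[i] - mu) ** 2 - var)
    return value(i, y) - tax

def is_nash(m):
    # pure-strategy Nash: no player gains from a unilateral grid deviation
    for i in range(N):
        for d in GRID:
            if d != m[i]:
                dev = list(m)
                dev[i] = d
                if payoff(i, tuple(dev)) > payoff(i, m):
                    return False
    return True

def pareto_dominated(m, pool):
    pm = [payoff(i, m) for i in range(N)]
    for other in pool:
        po = [payoff(i, other) for i in range(N)]
        if all(a >= b for a, b in zip(po, pm)) and any(a > b for a, b in zip(po, pm)):
            return True
    return False

profiles = list(itertools.product(GRID, repeat=N))
nash = [m for m in profiles if is_nash(m)]
pareto_nash = [m for m in nash if not pareto_dominated(m, profiles)]
print(len(nash), len(pareto_nash))
```

Re-running the enumeration while varying `GAMMA` shows how the free parameter shifts the count of equilibria and the share that are Pareto optimal, which is the comparative static the chapter examines.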
Keywords/Search Tags: Pareto optimal, Nash equilibria, reinforcement learning