Many practical modeling and control problems are characterized by interactive decision-making in the face of large uncertainty. The focus of this thesis is the formulation and analysis of decision-making problems with these features. The concepts and viewpoints of stochastic learning theory and game theory lend themselves naturally to this end. In particular, it is assumed that in a world of large uncertainty, decentralized decision-makers use simple learning algorithms to update their decisions. A number of interesting yet tractable interactive environments are addressed. First, the behavior of myopic players using learning schemes in an abstract game is studied. When the payoff to every player is identical, this common payoff is shown to increase in expectation at every instant, and a condition is given under which such behavior implies global optimality. Next, from a modeling viewpoint, new methods of interconnecting decision-makers, in both synchronous and sequential configurations, are introduced. This enables the construction of plausible models of larger systems by specifying local interactions. It is shown that synchronous models give rise to abstract games that can be analyzed in special cases. Finally, the decentralized control of a finite-state Markov chain is posed as a generalized sequential model. A simple learning scheme, with one decision-maker associated with each state, is shown to ensure convergence to the optimal policy. An important property of ergodic Markov chains in this control problem is also derived.
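As an illustration of the kind of simple learning algorithm assumed throughout, the sketch below simulates two decentralized players applying a linear reward-inaction (L_{R-I}) update in an identical-payoff game. The payoff matrix, step size, and Bernoulli reward model are invented for illustration; the particular scheme analyzed in the thesis may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical identical-payoff game: both players receive the same
# payoff signal (a success probability in [0, 1]); values are illustrative.
PAYOFF = np.array([[0.9, 0.2],
                   [0.3, 0.6]])

def lri_update(p, action, reward, step=0.05):
    """Linear reward-inaction update: on success, shift the action
    probability vector toward the chosen action; on failure, leave it
    unchanged (so p remains a valid probability vector)."""
    unit = np.zeros_like(p)
    unit[action] = 1.0
    return p + step * reward * (unit - p)

# Each myopic player maintains a distribution over its own actions and
# observes only the common payoff, never the other player's choice.
p1 = np.full(2, 0.5)
p2 = np.full(2, 0.5)

for _ in range(20000):
    a1 = rng.choice(2, p=p1)
    a2 = rng.choice(2, p=p2)
    r = float(rng.random() < PAYOFF[a1, a2])  # common Bernoulli payoff
    p1 = lri_update(p1, a1, r)
    p2 = lri_update(p2, a2, r)

print("player 1 mixed strategy:", np.round(p1, 3))
print("player 2 mixed strategy:", np.round(p2, 3))
```

Because the players share a common payoff, the expected payoff of this process is nondecreasing from step to step, in line with the monotonicity result summarized above; with a small step size, both strategy vectors concentrate, with high probability, on the jointly optimal action pair (here (0, 0), with payoff 0.9).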
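The decentralized Markov chain control problem admits a similar sketch: one learning automaton is associated with each state, and each automaton is updated from the average reward accrued between successive visits to its state. All specifics here (transition law, reward probabilities, step size, and the exact update rule) are hypothetical placeholders for the scheme developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical controlled Markov chain with 2 states and 2 actions.
# P[s, a] is the next-state distribution; R[s, a] is the success
# probability of a Bernoulli reward.  Values are illustrative only.
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.4, 0.6], [0.9, 0.1]]])
R = np.array([[0.2, 0.7],
              [0.9, 0.4]])

n_states, n_actions = R.shape
step = 0.02

# One automaton per state: a probability vector over that state's actions.
p = np.full((n_states, n_actions), 1.0 / n_actions)
last_visit = np.zeros(n_states, dtype=int)
last_action = np.zeros(n_states, dtype=int)
accrued = np.zeros(n_states)              # reward gathered since last visit
visited = np.zeros(n_states, dtype=bool)

s = 0
for t in range(1, 300001):
    if visited[s]:
        # Environment response for the automaton at s: average reward
        # per step since it last acted (always lies in [0, 1]).
        response = accrued[s] / (t - last_visit[s])
        unit = np.zeros(n_actions)
        unit[last_action[s]] = 1.0
        p[s] += step * response * (unit - p[s])  # reward-inaction step
    a = rng.choice(n_actions, p=p[s])
    visited[s], last_visit[s], last_action[s], accrued[s] = True, t, a, 0.0
    r = float(rng.random() < R[s, a])
    accrued += r                          # every state's tally accumulates
    s = rng.choice(n_states, p=P[s, a])

print("learned policy:", p.argmax(axis=1))  # optimal policy here is [1, 0]
```

Ergodicity matters in this construction: because the chain visits every state infinitely often under any policy, each automaton receives an unending stream of responses, and the per-visit average reward ties each local update to the long-run performance of the joint policy.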