
Information collection in stochastic optimization

Posted on: 2012-09-07
Degree: Ph.D.
Type: Thesis
University: Princeton University
Candidate: Ryzhov, Ilya Olegovich
Full Text: PDF
GTID: 2450390011453950
Subject: Applied Mathematics
Abstract/Summary:
Many practical stochastic optimization problems are characterized by a high degree of uncertainty about key problem parameters or exogenous processes. Even if the stochastic structure of the problem is modeled accurately, it is often difficult to compute exact solutions, giving rise to a wide variety of approximate methods (e.g., approximate dynamic programming). These approximate methods face a fundamental issue known as the exploration/exploitation dilemma. When we make a decision, we must strike a balance between exploitation, making decisions that seem to yield good outcomes based on existing estimates, and exploration, making decisions with uncertain outcomes on the chance that they may turn out to be better than we think. Each decision may yield an immediate economic benefit, but its outcome also provides new information that can be used to make better decisions in the future. The field of optimal learning studies formal mathematical models that capture fundamental aspects of information collection. In this thesis, we bridge the gap between optimal learning and stochastic optimization using the knowledge gradient method. We develop new learning problems with complex optimization models, such as shortest-path problems on graphs and linear programs with imperfectly known cost coefficients. We apply techniques from optimal learning, based on standard models and assumptions about conjugate prior distributions of unknown quantities, variance structures, and fixed information budgets, to collect information efficiently in these problems.
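For context, the knowledge gradient policy mentioned above measures the alternative whose next observation is expected to improve the final decision the most. The sketch below is a minimal illustration, not code from the thesis, of the standard knowledge gradient in the simplest setting of independent normal beliefs with known measurement noise; the function name and parameter values are assumptions chosen for the example.

```python
import numpy as np
from scipy.stats import norm

def knowledge_gradient(mu, sigma, noise_std):
    """Knowledge gradient values under independent normal beliefs.

    mu        -- current posterior means of the alternatives
    sigma     -- current posterior standard deviations
    noise_std -- standard deviation of the measurement noise (assumed known)
    """
    mu = np.asarray(mu, dtype=float)
    var = np.asarray(sigma, dtype=float) ** 2
    # Standard deviation of the change in the posterior mean from one measurement
    sigma_tilde = var / np.sqrt(var + noise_std ** 2)
    # Gap between each alternative and its best competitor
    best_other = np.array([np.max(np.delete(mu, i)) for i in range(len(mu))])
    z = -np.abs(mu - best_other) / sigma_tilde
    # Expected improvement in the value of the final choice: sigma_tilde * f(z),
    # where f(z) = z * Phi(z) + phi(z)
    return sigma_tilde * (z * norm.cdf(z) + norm.pdf(z))

# Illustrative beliefs over three alternatives: measure the one with the
# largest knowledge gradient, i.e., the most informative next observation.
mu = [1.0, 1.2, 0.8]
sigma = [0.5, 0.4, 0.9]
print(np.argmax(knowledge_gradient(mu, sigma, noise_std=1.0)))
```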
Keywords/Search Tags: Information, Stochastic, Optimization