
New dynamic programming approaches to stochastic optimal control problems in chemical engineering

Posted on: 2006-10-23
Degree: Ph.D.
Type: Dissertation
University: University of Toronto (Canada)
Candidate: Thompson, Adrian Martell
GTID: 1450390005494159
Subject: Engineering
Abstract/Summary:
Optimal control of chemical processes in the presence of stochastic model uncertainty is addressed. Contributions are made in two areas of process-control interest: dual adaptive control (DAC) and robust optimal control (ROC). These are synergistic in that DAC involves sequences of stochastic ROC problems. In chemical engineering, these problems typically have continuous state and control spaces and are subject to a curse of dimensionality (COD) within the stochastic dynamic programming (SDP) framework. The main novelty presented here is the method by which this COD is mitigated.

Existing methods to mitigate the COD include state-space aggregation, function approximation (FA), and exploitation of problem structure, e.g. system linearity. The first two yield problems of reduced but still large complexity. The third is problem-specific and does not generalize well to nonlinear, non-convex, or non-Gaussian structures. Here, two new algorithms are developed that mitigate the COD without these simplifications, imposing only minimal restrictions on problem structure.

The first, a Monte Carlo extension of iterative dynamic programming (IDP), reduces discretization requirements by restricting the control policy to the dominant portion of the state space. A proof of strong probabilistic convergence of IDP is derived and shown to extend to the new stochastic IDP (SIDP) algorithm. Simulations demonstrate that SIDP can provide significant COD mitigation in DAC applications relative to the standard SDP approach: a 96% reduction in computation, a 92% reduction in storage, and less than 2% loss in accuracy were achieved simultaneously.

The second algorithm, a policy iteration (PI) variant employing the Nyström discretization method, allows computation of continuous stochastic ROC policies without quadrature, function approximation, interpolation, or Monte Carlo methods. Lipschitz continuity assumptions allow reformulation of the original problem as an equivalent finite-state problem solvable in a Luus-Jaakola global optimization framework. This enables exponential computation reductions relative to standard PI. Simulations involving stochastic ROC of a nonlinear reactor exhibited a 99.9% reduction in computation with identical accuracy, and the average performance of the resulting policy was 58.2% better than that of the certainty-equivalence policy.
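The region-contraction idea behind IDP-style search can be illustrated with a minimal sketch: the expected cost of a candidate control is estimated by Monte Carlo, and the incumbent is refined by sampling within a shrinking search region. This is an invented one-step toy problem (quadratic cost, Gaussian disturbance), not the thesis's SIDP algorithm; all dynamics, costs, and tuning parameters below are illustrative assumptions.

```python
import random

def mc_cost(u, x0=1.0, n_samples=500):
    """Monte Carlo estimate of E[(x0 + u + w)^2 + 0.1*u^2], w ~ N(0, 0.1).
    A fixed seed (common random numbers) makes the estimate a deterministic
    function of u, so the search below is repeatable."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(n_samples):
        w = rng.gauss(0.0, 0.1)
        x1 = x0 + u + w                  # one-step stochastic dynamics
        total += x1 * x1 + 0.1 * u * u   # quadratic stage cost
    return total / n_samples

def region_search(cost, u0=0.0, radius=2.0, passes=20, candidates=15, shrink=0.8):
    """IDP-style region contraction: sample candidate controls around the
    incumbent, keep the best, and shrink the search region each pass."""
    best_u, best_j = u0, cost(u0)
    rng = random.Random(1)
    for _ in range(passes):
        for _ in range(candidates):
            u = best_u + rng.uniform(-radius, radius)
            j = cost(u)
            if j < best_j:
                best_u, best_j = u, j
        radius *= shrink
    return best_u, best_j

# For this toy cost the analytic minimizer is u = -1/1.1 ≈ -0.909.
u_star, j_star = region_search(mc_cost)
```

Restricting sampling to a contracting region around the incumbent trajectory is what lets this family of methods avoid gridding the full control space.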
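Once a Nyström-style discretization has produced an equivalent finite-state problem, plain policy iteration applies: evaluate the current policy exactly by solving a linear system, then improve it greedily. The toy 3-state, 2-action MDP below is a hedged stand-in with invented transition probabilities and costs, not the thesis's reactor model or its PI variant.

```python
import numpy as np

# Toy finite-state problem standing in for a Nyström-discretized one.
# P[a, s, s'] are transition probabilities; c[a, s] are expected stage costs.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],  # action 1
])
c = np.array([
    [2.0, 1.0, 0.5],
    [1.0, 2.0, 0.0],
])
gamma = 0.9  # discount factor

def policy_iteration(P, c, gamma):
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = c_pi exactly.
        P_pi = P[policy, np.arange(n_states)]   # rows of P under the policy
        c_pi = c[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
        # Policy improvement: greedy one-step lookahead on q[a, s].
        q = c + gamma * (P @ v)
        new_policy = np.argmin(q, axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v                    # fixed point reached
        policy = new_policy

policy, v = policy_iteration(P, c, gamma)
```

Because evaluation is an exact linear solve, PI converges in a finite number of improvement steps on a finite-state problem; the cost of each step scales with the number of discrete states, which is why reducing that number matters.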
Keywords/Search Tags: Stochastic, Dynamic programming, Chemical, ROC, Problem, COD, New, Policy