Optimal control is an important field of study both in theory and in applications. In a multi-stage system in which the state at each stage is determined by the state of the preceding stage and disturbed by an uncertain variable, a multi-stage uncertain optimal control problem arises. This paper considers an optimal control problem for a multi-stage uncertain system in which the expected value of an uncertain objective function is optimized. Based on Bellman's Principle of Optimality in dynamic programming, recurrence equations for the problem are presented. By using these recurrence equations, the exact bang-bang optimal controls and the corresponding optimal objective values are obtained for the optimal control problem with a linear objective function subject to an uncertain linear system and an uncertain quadratic system, respectively.
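For orientation, a minimal sketch of the backward recurrence structure is given below, assuming a state equation $x_{k+1}=f_k(x_k,u_k,\xi_k)$ with uncertain disturbances $\xi_k$, stage returns $r_k$, and value functions $J_k$; these symbols are illustrative notation only and need not match the paper's:
\[
J_N(x_N) = E\big[r_N(x_N)\big], \qquad
J_k(x_k) = \operatorname*{opt}_{u_k}\, E\Big[r_k(x_k,u_k) + J_{k+1}\big(f_k(x_k,u_k,\xi_k)\big)\Big], \quad k = N-1,\dots,0,
\]
where $\operatorname{opt}$ denotes maximization or minimization as appropriate and $E$ is the expected value operator for uncertain variables.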