The world is full of complex environments in which individuals must plan a series of choices to obtain a desired outcome. In these situations, entire sequences of events, including one's own future decisions, should be considered before taking an action. Backward induction provides a normative strategy for planning, in which one works backward, deterministically, from the end of a scenario; however, it often fails to account for human behavior. I propose an alternative account, Decision Field Theory-Planning (DFT-P), in which individuals plan future choices on the fly through repeated mental simulations. A key prediction of DFT-P is that payoff variability produces noisy simulations and reduces sensitivity to utility differences. In two multistage risky decision tree experiments I obtained this payoff variability effect, with choice proportions moving toward 0.50 as variability increased. I showed that DFT-P provides valuable insight into the strategies that people used to plan future choices and allocate cognitive resources across decision stages.
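The payoff variability effect described above can be illustrated with a minimal sketch of a sequential-sampling (random-walk) choice process in the spirit of Decision Field Theory. This is a hypothetical simulation for intuition only, not the model or parameters from the experiments: `mean_diff`, `payoff_sd`, and `threshold` are assumed illustrative values, and the accumulation rule is a generic noisy random walk to a threshold.

```python
import random

def choice_proportion(mean_diff, payoff_sd, threshold=2.0,
                      n_trials=2000, seed=1):
    """Estimate P(choose A) from a random-walk accumulation of
    noisy mentally simulated payoff differences.

    Each step samples one utility difference (A minus B); preference
    accumulates until a threshold is crossed. Larger payoff_sd means
    noisier mental simulations of the payoffs.
    """
    rng = random.Random(seed)
    choices_a = 0
    for _ in range(n_trials):
        p = 0.0
        while abs(p) < threshold:
            p += rng.gauss(mean_diff, payoff_sd)  # one noisy simulation
        choices_a += p > 0
    return choices_a / n_trials

# Same mean utility advantage for option A, different payoff variability.
low_var = choice_proportion(mean_diff=0.3, payoff_sd=0.5)
high_var = choice_proportion(mean_diff=0.3, payoff_sd=3.0)
# With higher variability, the choice proportion moves toward 0.50,
# i.e., sensitivity to the utility difference is reduced.
```

Under these assumed parameters, the low-variability condition yields a choice proportion near ceiling, while the high-variability condition yields one much closer to 0.50, mirroring the qualitative pattern the abstract reports.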