In this thesis we investigate methods for speeding up automatic control learning in Semi-Markov Decision Processes (SMDPs). We introduce the use of policies from previously solved problems as guidance when solving new ones. We provide an approach for processing previously solved problems to extract these policies, and we also contribute a method for using supplied or extracted policies to guide and speed up the solving of new problems. We treat extracting policies as a supervised learning task and introduce the Lumberjack algorithm, which extracts repeated sub-structure within a decision tree. We also introduce the TTree algorithm to increase problem solving speed on new problems. TTree solves SMDPs using a tree-based abstraction that is able to ignore irrelevant or harmful subregions within a supplied…
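The abstract above is high-level, so the following is only a hedged illustration of the general idea of using a supplied policy to speed up learning on a new problem; it is not the thesis's own Lumberjack or TTree algorithm. The sketch is tabular Q-learning on a hypothetical chain-shaped MDP in which exploration falls back on a supplied guide policy instead of a uniformly random action. All names here (`q_learn`, `guide`) are assumptions for illustration.

```python
import random

def q_learn(n_states=10, episodes=200, alpha=0.5, gamma=0.95,
            epsilon=0.2, guide=None, seed=0):
    """Tabular Q-learning on a hypothetical chain MDP: states 0..n-1,
    actions 0 = left, 1 = right, reward 1 on reaching the right end.
    When exploring, consult the supplied guide policy (if any) instead
    of picking a random action -- a toy form of policy-guided speedup."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n_states):          # per-episode step limit
            if rng.random() < epsilon:         # explore
                a = guide(s) if guide else rng.randrange(2)
            else:                              # exploit current estimates
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if r > 0:                          # goal reached
                break
    return Q

# A supplied policy from a previously "solved" problem: always move right.
Q = q_learn(guide=lambda s: 1)
greedy = [0 if q[0] > q[1] else 1 for q in Q]
```

Here the guide policy simply biases exploration; a harmful or irrelevant guide would slow learning, which is one motivation for an algorithm, like TTree, that can ignore bad subregions of a supplied policy.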