
Multi-Armed Bandits with Applications to Markov Decision Processes and Scheduling Problems

Posted on: 2015-05-01
Degree: Ph.D
Type: Dissertation
University: State University of New York at Stony Brook
Candidate: Muqattash, Isa M
Full Text: PDF
GTID: 1478390017497774
Subject: Applied Mathematics
Abstract/Summary:
The focus of this work is on practical applications of stochastic multi-armed bandits (MABs) in two distinct settings.

First, we develop and present REGA, a novel adaptive sampling-based algorithm for the control of finite-horizon Markov decision processes (MDPs) with very large state spaces and small action spaces. We apply a variant of the epsilon-greedy multi-armed bandit algorithm to each stage of the MDP in a recursive manner, thereby computing an estimate of the "reward-to-go" value at each stage. We provide a finite-time analysis of REGA; in particular, we bound the probability that the approximation error exceeds a given threshold, where the bound is expressed in terms of the number of samples collected at each stage of the MDP. We empirically compare REGA against other sampling-based algorithms and find that our algorithm is competitive. We also discuss measures to mitigate the curse of dimensionality arising from the backward-induction nature of REGA, which become necessary when the MDP horizon is large.

Second, we introduce e-Discovery, a topic of great significance to the legal industry that concerns sifting through large volumes of data to identify the "needle in the haystack" documents relevant to a lawsuit or investigation. Surprisingly, the topic has not been explicitly investigated in academia. Viewing the problem from a scheduling perspective, we highlight its main properties and challenges and outline a formal model for the problem. We examine an approach based on related work in scheduling theory and provide simulation results that demonstrate the performance of our approach on a very large data set. We also provide a list-scheduling approach that incorporates a side multi-armed bandit in lieu of standard heuristics. To this end, we propose the first MAB algorithm that accounts for both sleeping bandits and bandits with history. The empirical results are encouraging.

Surveys of multi-armed bandits and of scheduling theory are included. Many open problems, both new and known, are proposed or documented.
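To make the per-stage sampling rule concrete, the following is a minimal sketch of a standard sample-average epsilon-greedy bandit loop, the kind of subroutine REGA applies at each MDP stage. The function names (`epsilon_greedy_pull`, `run_bandit`) and the flat reward interface are illustrative assumptions, not the dissertation's actual implementation; in a REGA-style recursion, the reward callback for a stage would itself recursively sample the bandit of the next stage to estimate the reward-to-go.

```python
import random

def epsilon_greedy_pull(estimates, epsilon):
    """One step of epsilon-greedy arm selection.

    With probability epsilon, explore a uniformly random arm;
    otherwise exploit the arm with the highest current estimate.
    """
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda a: estimates[a])

def run_bandit(pull_reward, n_arms, n_samples, epsilon=0.1):
    """Sample-average epsilon-greedy loop; returns per-arm reward estimates.

    pull_reward(a) draws one stochastic reward for arm a; in a recursive
    MDP setting it would instead sample the next stage's bandit.
    """
    estimates = [0.0] * n_arms
    counts = [0] * n_arms
    for _ in range(n_samples):
        a = epsilon_greedy_pull(estimates, epsilon)
        r = pull_reward(a)
        counts[a] += 1
        estimates[a] += (r - estimates[a]) / counts[a]  # incremental mean
    return estimates
```

Applied stage by stage via backward induction, the arm estimates at each stage serve as the reward-to-go values fed to the preceding stage, which is the source of the dimensionality concerns discussed above.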
Keywords/Search Tags: Bandits, Scheduling, REGA, MDP