A wide variety of stochastic control problems can be posed as Markov decision processes. However, determining an optimal control policy is intractable for many of these problems. The objective of this thesis is to develop tractable methods for finding approximately optimal policies, and to prove that the cost incurred by these policies is close to the optimal achievable cost. We present a simple way to compute upper and lower bounds on the average cost incurred by a Markov decision process. Using these bounds, we develop a method for bounding the ratio between the cost achieved by a given policy and the optimal achievable cost. This method can be applied to Markov decision processes with general state spaces.

This approach is demonstrated on a variety of problems, including wireless power control, control of packet switches, state estimation in networked linear systems, and a stochastic version of the k-server problem. In each example, either finding a policy that minimizes average cost is intractable, or implementing such a policy is impractical. However, in each example we are able to find a simple, practically implementable policy and prove a bound on the ratio between the cost incurred by this policy and the optimal achievable cost.
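As a rough sketch of the flavor of these bounds (this is the standard average-cost comparison inequality for a finite-state, finite-action MDP with stage cost g(x,u) and transition probabilities p(y | x,u); the general-state-space formulation developed in the thesis may differ in its technical conditions): if a scalar \lambda and a function V satisfy

    \lambda + V(x) \le \min_{u} \Big[\, g(x,u) + \textstyle\sum_{y} p(y \mid x,u)\, V(y) \,\Big] \quad \text{for all } x,

then \lambda is a lower bound on the optimal average cost. Symmetrically, if a policy \mu and a pair (\bar\lambda, \bar{V}) satisfy

    \bar\lambda + \bar{V}(x) \ge g(x,\mu(x)) + \textstyle\sum_{y} p(y \mid x,\mu(x))\, \bar{V}(y) \quad \text{for all } x,

then the average cost incurred by \mu is at most \bar\lambda. With nonnegative costs, the ratio \bar\lambda / \lambda then bounds the ratio between the cost incurred by \mu and the optimal achievable cost.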