Markov Decision Process (MDP) Toolbox

Written by Kevin Murphy, 1999.

This toolbox implements value iteration and policy iteration for discrete, tabular MDPs, and includes grid-world examples from the textbooks by Sutton and Barto (Reinforcement Learning: An Introduction) and by Russell and Norvig (Artificial Intelligence: A Modern Approach).
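To illustrate the kind of computation the toolbox performs, here is a minimal value-iteration sketch in Python. This is not the toolbox's own code or API; the function name, array layout, and the two-state example MDP are invented for illustration.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Tabular value iteration (illustrative, not the toolbox's API).

    P: (A, S, S) array, P[a, s, t] = Pr(next state t | state s, action a)
    R: (S, A) array of expected immediate rewards
    Returns the optimal value function V (length S) and a greedy policy.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical two-state, two-action MDP: action 0 stays put, action 1
# switches state; being in state 1 yields reward 1 under either action.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[0.0, 0.0],
              [1.0, 1.0]])
V, pi = value_iteration(P, R, gamma=0.9)
```

For this toy MDP the optimal policy is to switch out of state 0 and stay in state 1, with values V(1) = 1/(1 - gamma) = 10 and V(0) = gamma/(1 - gamma) = 9. Policy iteration reaches the same fixed point by alternating full policy evaluation with greedy policy improvement, typically in fewer (but more expensive) sweeps.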