Artificial Intelligence 2E

foundations of computational agents

Value iteration is a method of computing an optimal policy for an MDP and its value.

Value iteration starts at the “end” and then works backward, refining an estimate of either ${Q}^{*}$ or ${V}^{*}$. There is really no end, so it uses an arbitrary end point. Let ${V}_{k}$ be the value function assuming there are $k$ stages to go, and let ${Q}_{k}$ be the $Q$-function assuming there are $k$ stages to go. These can be defined recursively. Value iteration starts with an arbitrary function ${V}_{0}$. For subsequent stages, it uses the following equations to get the functions for $k+1$ stages to go from the functions for $k$ stages to go:

$$Q_{k+1}(s,a)=R(s,a)+\gamma *\sum _{{s}^{\prime}}P({s}^{\prime}\mid s,a)*V_{k}({s}^{\prime})$$

$$V_{k}(s)=\underset{a}{\mathrm{max}}\,Q_{k}(s,a)$$

It can either save the $V[S]$ array or the $Q[S,A]$ array. Saving the $V$ array results in less storage, but it is more difficult to determine an optimal action, and one more iteration is needed to determine which action results in the greatest value.

Figure 9.16 shows the value iteration algorithm when the $V$ array is stored. This procedure converges no matter what the initial value function ${V}_{0}$ is. An initial value function that approximates ${V}^{*}$ converges more quickly than one that does not. The basis for many abstraction techniques for MDPs is to use some heuristic method to approximate ${V}^{*}$ and to use this as an initial seed for value iteration.
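As a sketch, the procedure of Figure 9.16 can be written in Python; the nested-dict encoding of $P$ and $R$ is an assumption for illustration, not the book's pseudocode:

```python
def value_iteration(states, actions, P, R, gamma, iterations):
    """Value iteration storing only the V array (a sketch of Figure 9.16).

    P[s][a] is a dict mapping each successor s1 to P(s1 | s, a);
    R[s][a] is the expected immediate reward R(s, a).
    """
    V = {s: 0 for s in states}  # arbitrary initial value function V0
    for _ in range(iterations):
        # V_{k+1}(s) = max_a [ R(s,a) + gamma * sum_{s1} P(s1|s,a) * V_k(s1) ]
        V = {s: max(R[s][a] + gamma * sum(p * V[s1]
                                          for s1, p in P[s][a].items())
                    for a in actions)
             for s in states}
    return V
```

The all-zeros initialization is the arbitrary ${V}_{0}$; seeding with a heuristic approximation of ${V}^{*}$ only changes that one line.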

Consider the two-state MDP of Example 9.27 with discount $\gamma =0.8$. We write the value function as $[\mathit{healthy\_value},\mathit{sick\_value}]$, and the *Q*-function as $[[\mathit{healthy\_relax},\mathit{healthy\_party}],[\mathit{sick\_relax},\mathit{sick\_party}]]$. Suppose initially the value function is $[0,0]$. The next *Q*-value is $[[7,10],[0,2]]$, so the next value function is $[10,2]$ (obtained by Sam partying). The next *Q*-value is then

| State | Action | Value |
|---|---|---|
| *healthy* | *relax* | $7+0.8*(0.95*10+0.05*2)=14.68$ |
| *healthy* | *party* | $10+0.8*(0.7*10+0.3*2)=16.08$ |
| *sick* | *relax* | $0+0.8*(0.5*10+0.5*2)=4.8$ |
| *sick* | *party* | $2+0.8*(0.1*10+0.9*2)=4.24$ |

So the next value function is $[16.08,4.8]$. After 1000 iterations, the value function is $[35.71,23.81]$. So the $Q$-function is $[[35.10,35.71],[23.81,22.0]]$. Therefore, the optimal policy is to party when healthy and relax when sick.
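These iterates can be reproduced with a short script; the dict encoding of the MDP of Example 9.27 is reconstructed from the numbers in the table above:

```python
# Two-state party/relax MDP of Example 9.27, encoded as nested dicts.
gamma = 0.8
R = {'healthy': {'relax': 7, 'party': 10},
     'sick':    {'relax': 0, 'party': 2}}
P = {'healthy': {'relax': {'healthy': 0.95, 'sick': 0.05},
                 'party': {'healthy': 0.7,  'sick': 0.3}},
     'sick':    {'relax': {'healthy': 0.5,  'sick': 0.5},
                 'party': {'healthy': 0.1,  'sick': 0.9}}}

V = {'healthy': 0, 'sick': 0}            # initial value function [0, 0]
history = []
for _ in range(1000):
    Q = {s: {a: R[s][a] + gamma * sum(p * V[s1] for s1, p in P[s][a].items())
             for a in P[s]}
         for s in P}
    V = {s: max(Q[s].values()) for s in Q}
    history.append((V['healthy'], V['sick']))
# history[0] is (10, 2); history[1] is (16.08, 4.8), up to float rounding;
# V converges to about (35.71, 23.81).
```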

Consider the nine squares around the $+10$ reward of Example 9.28. The discount is $\gamma =0.9$. Suppose the algorithm starts with $V_{0}[s]=0$ for all states $s$.

The values of $V_{1}$, $V_{2}$, and $V_{3}$ (to one decimal point) for these nine cells are

$V_{1}$:

| 0 | 0 | $-0.1$ |
|---|---|---|
| 0 | 10 | $-0.1$ |
| 0 | 0 | $-0.1$ |

$V_{2}$:

| 0 | $6.3$ | $-0.1$ |
|---|---|---|
| $6.3$ | $9.8$ | $6.2$ |
| 0 | $6.3$ | $-0.1$ |

$V_{3}$:

| $4.5$ | $6.2$ | $4.4$ |
|---|---|---|
| $6.2$ | $9.7$ | $6.6$ |
| $4.5$ | $6.1$ | $4.4$ |

After the first step of value iteration (in $V_{1}$) the nodes get their immediate expected reward. The center node is the $+10$ reward state. The nodes in the right column have a value of $-0.1$, with the optimal actions being up, left, and down; each of these actions has a $0.1$ chance of crashing into the wall (reward $-1$), giving an immediate expected reward of $-0.1$.

$V_{2}$ gives the values after the second step of value iteration. Consider the node immediately to the left of the $+10$ reward state. Its optimal action is to go right; it has a 0.7 chance of getting a reward of 10 in the following state, so that is worth 9 (10 times the discount of $0.9$) to it now. The expected value of the other possible resulting states is $0$. Thus, the value of this state is $0.7*9=6.3$.

Consider the node immediately to the right of the $+10$ reward state after the second step of value iteration. The agent’s optimal action in this state is to go left. The value of this state is

$$\begin{array}{rlclcl}
 & \text{Prob} & \text{Reward} & & \text{Future Value} & \\
 & 0.7*( & 0 & + & 0.9*10) & \quad\text{Agent goes left}\\
+ & 0.1*( & 0 & + & 0.9*{-0.1}) & \quad\text{Agent goes up}\\
+ & 0.1*( & -1 & + & 0.9*{-0.1}) & \quad\text{Agent goes right}\\
+ & 0.1*( & 0 & + & 0.9*{-0.1}) & \quad\text{Agent goes down}
\end{array}$$

which evaluates to 6.173, which rounds to the $6.2$ shown in $V_{2}$ above.
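The arithmetic can be checked directly:

```python
# Second-iteration backup for the state just right of the +10 state.
# The intended action is left; each other direction is taken with
# probability 0.1, and moving right crashes into the wall (reward -1).
gamma = 0.9
value = (0.7 * (0 + gamma * 10)         # goes left into the +10 state
         + 0.1 * (0 + gamma * -0.1)     # goes up
         + 0.1 * (-1 + gamma * -0.1)    # goes right, hits the wall
         + 0.1 * (0 + gamma * -0.1))    # goes down
# value is 6.173 (up to float rounding)
```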

The $+10$ reward state has a value less than 10 in $V_{2}$ because the agent gets flung to one of the corners and these corners look bad at this stage.

After the next step of value iteration, shown in $V_{3}$, the effect of the $+10$ reward has progressed one more step. In particular, the corners shown get values that indicate a reward in three steps.

An applet is available on the book website showing the details of value iteration for this example.

The value iteration algorithm of Figure 9.16 has an array for each stage, but it really only needs to store the current and the previous arrays. It can update one array based on values from the other.

A common refinement of this algorithm is asynchronous value iteration. Rather than sweeping through the states to create a new value function, asynchronous value iteration updates the states one at a time, in any order, and stores the values in a single array. Asynchronous value iteration can store either the $Q[s,a]$ array or the $V[s]$ array. Figure 9.17 shows asynchronous value iteration when the $Q$ array is stored. It converges faster than value iteration and is the basis of some of the algorithms for reinforcement learning. Termination can be difficult to determine if the agent must guarantee a particular error, unless it is careful about how the actions and states are selected. Often, this procedure is run indefinitely as an anytime algorithm where it is always prepared to give its best estimate of the optimal action in a state when asked.
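A sketch of asynchronous value iteration with a stored $Q$ array, choosing states and actions uniformly at random (the selection strategy, the seed, and the dict-based MDP encoding are illustrative assumptions, not the book's Figure 9.17 code):

```python
import random

def asynchronous_value_iteration(P, R, gamma, updates, seed=0):
    """Asynchronous value iteration storing Q[s][a] (a sketch of Figure 9.17).

    P[s][a] maps each successor s1 to P(s1 | s, a); R[s][a] is R(s, a).
    """
    rng = random.Random(seed)            # seeded for reproducibility
    states = list(P)
    Q = {s: {a: 0 for a in P[s]} for s in states}
    for _ in range(updates):
        s = rng.choice(states)           # any selection order is allowed
        a = rng.choice(list(P[s]))
        # One backup of a single (s, a) pair, in place:
        Q[s][a] = R[s][a] + gamma * sum(
            p * max(Q[s1].values()) for s1, p in P[s][a].items())
    return Q
```

Run as an anytime algorithm, the current best action in state $s$ is simply the $a$ maximizing $Q[s][a]$.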

Asynchronous value iteration could also be implemented by storing just the $V[s]$ array. In that case, the algorithm selects a state $s$ and carries out the update:

$$V[s]:=\underset{a}{\mathrm{max}}\left(R(s,a)+\gamma *\sum _{{s}^{\prime}}P({s}^{\prime}\mid s,a)*V[{s}^{\prime}]\right).$$

Although this variant stores less information, it is more difficult to extract the policy. It requires one extra backup to determine which action $a$ results in the maximum value. This can be done using

$$\pi [s]:=\mathrm{arg}\underset{a}{\mathrm{max}}\left(R(s,a)+\gamma *\sum _{{s}^{\prime}}P({s}^{\prime}\mid s,a)*V[{s}^{\prime}]\right).$$
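As a sketch, the single-state backup and the extra backup for policy extraction look like this; the helper names and the dict-based MDP encoding are assumptions for illustration:

```python
def update_state(V, s, P, R, gamma):
    # One asynchronous backup on the V array for the selected state s.
    # P[s][a] maps each successor s1 to P(s1 | s, a); R[s][a] is R(s, a).
    V[s] = max(R[s][a] + gamma * sum(p * V[s1] for s1, p in P[s][a].items())
               for a in P[s])

def policy_action(V, s, P, R, gamma):
    # The extra backup needed to recover an action achieving the maximum.
    return max(P[s], key=lambda a: R[s][a] + gamma *
               sum(p * V[s1] for s1, p in P[s][a].items()))
```

On the two-state MDP above with $V\approx[35.71, 23.81]$, `policy_action` returns *party* when healthy and *relax* when sick.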

In Example 9.32, the state one step up and one step to the left of the +10 reward state only had its value updated after three value iterations, in which each iteration involved a sweep through all of the states.

In asynchronous value iteration, the $+10$ reward state can be chosen first. Next, the node to its left can be chosen, and its value will be $0.7*0.9*10=6.3$. Next, the node above that node could be chosen, and its value would become $0.7*0.9*6.3=3.969$. Note that it has a value that reflects that it is close to a $+10$ reward after considering 3 states, not 300 states, as does value iteration.
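This ordering can be sketched on a three-state fragment of the grid; the state names and the lumped `other` successor (standing for all states still valued 0) are simplifying assumptions:

```python
gamma = 0.9
# Values for the +10 state, the node to its left, the node above that one,
# and a dummy 'other' state standing for all other successors (value 0).
V = {'plus10': 0, 'left': 0, 'upleft': 0, 'other': 0}
R = {'plus10': 10, 'left': 0, 'upleft': 0}
# Successor distribution under the optimal action (0.7 chance of success).
P = {'plus10': {'other': 1.0},
     'left':   {'plus10': 0.7, 'other': 0.3},
     'upleft': {'left': 0.7, 'other': 0.3}}

for s in ['plus10', 'left', 'upleft']:       # a good asynchronous ordering
    V[s] = R[s] + gamma * sum(p * V[s1] for s1, p in P[s].items())
# After just three updates, V['left'] is 6.3 and V['upleft'] is 3.969
# (up to float rounding).
```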