Artificial Intelligence: Foundations of Computational Agents, 2E

Rather than choosing the most likely model or delineating the set of all models that are consistent with the training data, another approach is to compute the posterior probability of each model given the training examples.

The idea of Bayesian learning is to compute the posterior probability distribution of the target features of a new example conditioned on its input features and all the training examples.

Suppose a new case has inputs $X=x$ (which we write simply as $x$) and target features $Y$. The aim is to compute $P(Y\mid x\wedge E)$, where $E$ is the set of training examples. This is the probability distribution of the target variables given the particular inputs and the examples. The role of a model is to be the assumed generator of the examples. If we let $M$ be a set of disjoint and covering models, then reasoning by cases and the chain rule give

$$\begin{aligned}
P(Y\mid x\wedge E)&=\sum _{m\in M}P(Y\wedge m\mid x\wedge E)\\
&=\sum _{m\in M}P(Y\mid m\wedge x\wedge E)*P(m\mid x\wedge E)\\
&=\sum _{m\in M}P(Y\mid m\wedge x)*P(m\mid E).
\end{aligned}$$

The first two equalities follow from the definition of conditional probability. The last equality relies on two assumptions: the model includes all the information about the examples that is necessary for a particular prediction, $P(Y\mid m\wedge x\wedge E)=P(Y\mid m\wedge x)$, and the model does not change depending on the inputs of the new example, $P(m\mid x\wedge E)=P(m\mid E)$. Instead of choosing the best model, Bayesian learning relies on model averaging, averaging over the predictions of all the models, where each model is weighted by its posterior probability given the training examples.
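Model averaging can be sketched in a few lines of Python. The models, weights, and numbers below are made up for illustration: each candidate model supplies $P(Y{=}true\mid m\wedge x)$, and the prediction is the average weighted by the posterior $P(m\mid E)$.

```python
# Hypothetical sketch of Bayesian model averaging over a finite set of
# candidate models. Each model gives P(Y=true | m, x); the prediction
# averages these, weighted by the posterior P(m | E).

def model_average(models, posterior, x):
    """models: list of functions giving P(Y=true | m, x);
    posterior: list of weights P(m | E) summing to 1."""
    return sum(w * m(x) for m, w in zip(models, posterior))

# Three toy models that ignore x and predict a fixed probability:
models = [lambda x: 0.2, lambda x: 0.5, lambda x: 0.8]
posterior = [0.1, 0.3, 0.6]  # assumed P(m | E) for each model
prediction = model_average(models, posterior, x=None)
# prediction = 0.1*0.2 + 0.3*0.5 + 0.6*0.8 = 0.65
```

Note that no single model predicts 0.65; the averaged prediction need not coincide with any one model's prediction.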

$P(m\mid E)$ can be computed using Bayes’ rule:

$$P(m\mid E)=\frac{P(E\mid m)*P(m)}{P(E)}.$$

Thus, the weight of each model depends on how well it predicts the data (the likelihood) and its prior probability. The denominator, $P(E)$, is a normalizing constant to make sure the posterior probabilities of the models sum to 1. $P(E)$ is called the partition function. Computing $P(E)$ may be very difficult when there are many models.

A set $\{{e}_{1},\ldots,{e}_{k}\}$ of examples is independent and identically distributed (i.i.d.) given model $m$ if examples ${e}_{i}$ and ${e}_{j}$, for $i\ne j$, are independent given $m$, and each example has the same distribution given $m$. If the set of training examples $E$ is $\{{e}_{1},\ldots,{e}_{k}\}$, the assumption that the examples are i.i.d. implies

$$P(E\mid m)=\prod _{i=1}^{k}P({e}_{i}\mid m).$$

The i.i.d. assumption can be represented as a belief network, shown in Figure 10.10, where each of the ${e}_{i}$ are independent given model $m$.

If $m$ is made into a discrete variable, any of the inference methods of the previous chapter could be used for inference in this network. A standard reasoning technique in such a network is to condition on every observed ${e}_{i}$ and to query the model variable or an unobserved ${e}_{i}$ variable.
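This reasoning can be sketched directly, without belief-network machinery, using $P(m\mid E)\propto P(m)*\prod_i P(e_i\mid m)$. The two coin models and their numbers below are invented for illustration.

```python
# Sketch (with made-up numbers): condition on observed i.i.d. examples
# e_1..e_k and query a discrete model variable m, using
# P(m | E) proportional to P(m) * product_i P(e_i | m).

def posterior_over_models(prior, likelihood, examples):
    """prior: dict m -> P(m); likelihood: dict m -> function e -> P(e | m)."""
    unnorm = {}
    for m, pm in prior.items():
        p = pm
        for e in examples:  # i.i.d. examples: multiply the likelihoods
            p *= likelihood[m](e)
        unnorm[m] = p
    z = sum(unnorm.values())  # the partition function P(E)
    return {m: p / z for m, p in unnorm.items()}

# Two hypothetical models of a coin: fair, or biased toward heads.
prior = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": lambda e: 0.5,
              "biased": lambda e: 0.8 if e == "h" else 0.2}
post = posterior_over_models(prior, likelihood, ["h", "h", "h"])
# Three heads in a row shift the posterior toward the biased model.
```

With many models, computing the normalizing constant `z` is exactly the difficulty with $P(E)$ noted above; here, with only two models, it is trivial.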

The set of models may include structurally different models in addition to models that differ in the values of the parameters. One of the techniques of Bayesian learning is to make the parameters of the model explicit and to determine the distribution over the parameters.

Consider the simplest learning task of learning a single Boolean random variable, $Y$, with no input features. (This is the case covered in Section 7.2.3.) Each example specifies $Y{=}\mathit{true}$ or $Y{=}\mathit{false}$. The aim is to learn the probability distribution of $Y$ given the set of training examples.

There is a single parameter, $\varphi$, that determines the set of all models. Suppose that $\varphi$ represents the probability of $Y{=}\mathit{true}$. We treat $\varphi$ as a real-valued random variable on the interval $[0,1]$. Thus, by definition of $\varphi$, $P(Y{=}\mathit{true}\mid \varphi)=\varphi$ and $P(Y{=}\mathit{false}\mid \varphi)=1-\varphi$.

Suppose, first, an agent has no prior information about the probability of Boolean variable $Y$ and no knowledge beyond the training examples. Such ignorance can be modeled by giving the variable $\varphi$ a uniform prior distribution over the interval $[0,1]$. This is the probability density function labeled ${n}_{0}{=}0,{n}_{1}{=}0$ in Figure 10.11.

We can update the probability distribution of $\varphi$ given some examples. Assume that the examples, obtained by running a number of independent experiments, are a particular sequence of outcomes that consists of ${n}_{0}$ cases where $Y$ is false and ${n}_{1}$ cases where $Y$ is true.

The posterior distribution for $\varphi$ given the training examples can be derived by Bayes' rule. Let the examples $E$ be the particular sequence of observations that resulted in ${n}_{1}$ occurrences of $Y{=}\mathit{true}$ and ${n}_{0}$ occurrences of $Y{=}\mathit{false}$. Bayes' rule gives us

$$P(\varphi\mid E)=\frac{P(E\mid \varphi)*P(\varphi)}{P(E)}.$$

The denominator is a normalizing constant to make sure the area under the curve is 1.

Given that the examples are i.i.d.,

$$P(E\mid \varphi)={\varphi}^{{n}_{1}}*{(1-\varphi)}^{{n}_{0}}$$

because there are ${n}_{0}$ cases where $Y{=}\mathit{false}$, each with a probability of $1-\varphi$, and ${n}_{1}$ cases where $Y{=}\mathit{true}$, each with a probability of $\varphi$.
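The resulting posterior can be approximated numerically. The sketch below discretizes $\varphi$ on a grid with a uniform prior; the grid size and the counts $n_0{=}1$, $n_1{=}2$ are illustrative choices.

```python
# Grid approximation of the posterior density of phi with a uniform
# prior, after n1 observations of Y=true and n0 of Y=false.

def posterior_grid(n0, n1, steps=1000):
    grid = [(i + 0.5) / steps for i in range(steps)]
    unnorm = [phi**n1 * (1 - phi)**n0 for phi in grid]  # likelihood * flat prior
    z = sum(unnorm) / steps                             # approximates P(E)
    density = [u / z for u in unnorm]
    return grid, density

grid, density = posterior_grid(n0=1, n1=2)
mode = grid[max(range(len(density)), key=density.__getitem__)]
mean = sum(p * d for p, d in zip(grid, density)) / len(grid)
# mode is near n1/(n0+n1) = 2/3; mean is near (n1+1)/(n0+n1+2) = 3/5
```

The gap between the mode and the mean of this posterior is discussed below; both are recovered by the grid approximation.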

Note that $E$ is the particular sequence of observations made. If the observation was just that there were a total of ${n}_{0}$ occurrences of $Y{=}\mathit{false}$ and ${n}_{1}$ occurrences of $Y{=}\mathit{true}$, we would get a different answer, because we would have to take into account all the possible sequences that could have given this count. The latter is known as the binomial distribution.

One possible prior probability, $P(\varphi)$, is a uniform distribution on the interval $[0,1]$. This would be reasonable when the agent has no prior information about the probability.

Figure 10.11 gives some posterior distributions of the variable $\varphi$ based on different sample sizes, given a uniform prior. The cases are $({n}_{0}{=}1,{n}_{1}{=}2)$, $({n}_{0}{=}2,{n}_{1}{=}4)$, and $({n}_{0}{=}4,{n}_{1}{=}8)$. Each of these peaks at the same place, namely at $\frac{2}{3}$. More training examples make the curve sharper.

The distribution in this example is known as the beta distribution; it is parameterized by two counts, ${\alpha}_{0}$ and ${\alpha}_{1}$, and a probability $p$. Traditionally, the ${\alpha}_{i}$ parameters for the beta distribution are one more than the counts; thus, ${\alpha}_{i}={n}_{i}+1$. The beta distribution is

$$\mathrm{Beta}^{{\alpha}_{0},{\alpha}_{1}}(p)=\frac{1}{Z}{p}^{{\alpha}_{1}-1}*{(1-p)}^{{\alpha}_{0}-1}$$

where $Z$ is a normalizing constant that ensures the integral over all values is 1. Thus, the uniform distribution on $[0,1]$ is the beta distribution $\mathrm{Beta}^{1,1}$.
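For the beta distribution, the normalizing constant $Z$ is the beta function, computable from the gamma function. A minimal sketch, using only the standard library:

```python
import math

# Beta density as defined above; Z is the beta function,
# Z = Gamma(a0) * Gamma(a1) / Gamma(a0 + a1).

def beta_pdf(p, a0, a1):
    z = math.gamma(a0) * math.gamma(a1) / math.gamma(a0 + a1)
    return p**(a1 - 1) * (1 - p)**(a0 - 1) / z

# Beta^{1,1} is the uniform distribution: density 1 everywhere on (0,1).
uniform_value = beta_pdf(0.3, 1, 1)
# With counts n0=1, n1=2 (so a0=2, a1=3), the density peaks at 2/3.
```

As a check, the density for $a_0{=}2$, $a_1{=}3$ is higher at $p{=}\frac{2}{3}$ than at $p{=}\frac{1}{2}$, consistent with the curves in Figure 10.11.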

Suppose instead that $Y$ is a discrete variable with $k$ different values. The generalization of the beta distribution to cover this case is known as the Dirichlet distribution. The Dirichlet distribution with two sorts of parameters, the “counts” ${\alpha}_{1},\mathrm{\dots},{\alpha}_{k}$, and the probability parameters ${p}_{1},\mathrm{\dots},{p}_{k}$, is

$$\mathrm{Dirichlet}^{{\alpha}_{1},\ldots,{\alpha}_{k}}({p}_{1},\ldots,{p}_{k})=\frac{1}{Z}\prod _{j=1}^{k}{p}_{j}^{{\alpha}_{j}-1}$$

where ${p}_{i}$ is the probability of the $i$th outcome (and so $0\le {p}_{i}\le 1$), each ${\alpha}_{i}$ is a non-negative real, and $Z$ is a normalizing constant that ensures the integral over all the probability values is 1. We can think of ${\alpha}_{i}$ as one more than the count of the $i$th outcome, ${\alpha}_{i}={n}_{i}+1$. The Dirichlet distribution looks like Figure 10.11 along each dimension (i.e., as each ${p}_{j}$ varies between 0 and 1).

For many cases, averaging over all models weighted by their posterior distribution is difficult, because the models may be complicated (e.g., if they are decision trees or even belief networks). For the Dirichlet distribution, the expected value for outcome $i$ (averaging over all ${p}_{j}$) is

$$\frac{{\alpha}_{i}}{{\sum}_{j}{\alpha}_{j}}.$$

The reason that the ${\alpha}_{i}$ parameters are one more than the counts in the definitions of the beta and Dirichlet distributions is to make this formula simple. This fraction is well defined only when the ${\alpha}_{j}$ are all non-negative and not all are zero.
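This expected value is simple enough to sketch directly; with ${\alpha}_{i}={n}_{i}+1$ from a uniform prior, it is the count plus one divided by the total count plus $k$. The counts below are illustrative.

```python
# Expected probability of each outcome under a Dirichlet posterior with
# alpha_i = n_i + 1 (observed counts plus a uniform prior).

def dirichlet_mean(counts):
    alphas = [n + 1 for n in counts]  # one more than each count
    total = sum(alphas)
    return [a / total for a in alphas]

# Counts 0, 3, 1 over three outcomes:
probs = dirichlet_mean([0, 3, 1])
# -> [1/7, 4/7, 2/7]; the unseen outcome still gets nonzero probability.
```

Note that the first outcome, never observed, is not assigned probability zero, which anticipates the point made below about the $n_1{=}0$ case.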

Consider Example 10.14, which determines the value of $\varphi$ based on a sequence of observations made up of ${n}_{0}$ cases where $Y$ is false and ${n}_{1}$ cases where $Y$ is true. Consider the posterior distribution as shown in Figure 10.11. What is interesting about this is that, whereas the most likely posterior value of $\varphi$ is $\frac{{n}_{1}}{{n}_{0}+{n}_{1}}$, the expected value of this distribution is $\frac{{n}_{1}+1}{{n}_{0}+{n}_{1}+2}$.

Thus, the expected value of the ${n}_{0}{=}1,{n}_{1}{=}2$ curve is $\frac{3}{5}$; for the ${n}_{0}{=}2,{n}_{1}{=}4$ case the expected value is $\frac{5}{8}$; and for the ${n}_{0}{=}4,{n}_{1}{=}8$ case it is $\frac{9}{14}$. As the learner gets more training examples, this value approaches the empirical frequency $\frac{{n}_{1}}{{n}_{0}+{n}_{1}}$.

This estimate is better than the empirical frequency $\frac{{n}_{1}}{{n}_{0}+{n}_{1}}$ for a number of reasons. First, it tells us what to do if the learning agent has no examples: use the uniform prior of $\frac{1}{2}$. This is the expected value of the ${n}_{0}{=}0,{n}_{1}{=}0$ case. Second, consider the case where ${n}_{1}{=}0$ and ${n}_{0}{=}3$. The agent should not use $P(Y{=}\mathit{true})=0$, because this says that $Y{=}\mathit{true}$ is impossible, and it certainly does not have evidence for this! The expected value of this curve with a uniform prior is $\frac{1}{5}$.

An agent does not have to start with a uniform prior; it could start with any prior distribution. If the agent starts with a prior that is a Dirichlet distribution, its posterior will be a Dirichlet distribution. The posterior distribution can be obtained by adding the observed counts to the ${\alpha}_{i}$ parameters of the prior distribution.

Thus, the beta and Dirichlet distributions provide a justification for using pseudocounts for estimating probabilities. The pseudocount represents the prior knowledge. A flat prior gives a pseudocount of 1. Thus, Laplace smoothing can be justified in terms of making predictions from initial ignorance.
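The conjugate update described above takes only a few lines; the prior parameters and counts below are made up for illustration.

```python
# Posterior Dirichlet parameters = prior parameters + observed counts.
# Pseudocounts encode the prior; a flat prior is a pseudocount of 1.

def dirichlet_update(prior_alphas, counts):
    return [a + n for a, n in zip(prior_alphas, counts)]

# Flat prior over [true, false]; observe 0 true and 3 false:
posterior_alphas = dirichlet_update([1, 1], [0, 3])  # -> [1, 4]
expected = [a / sum(posterior_alphas) for a in posterior_alphas]
# -> [0.2, 0.8]: the unseen outcome gets 1/5, as in the example above.
```

An agent with stronger prior knowledge would simply start from larger pseudocounts, e.g. `[10, 10]` for a confident belief that the probability is near $\frac{1}{2}$.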

In addition to using the posterior distribution of $\varphi$ to derive the expected value, we can use it to answer other questions, such as: what is the probability that $\varphi$ is in the range $[a,b]$? In other words, derive $P(\varphi \ge a\wedge \varphi \le b\mid E)$. This is the problem that the Reverend Thomas Bayes solved more than 250 years ago [Bayes, 1763]. The solution he gave, although in much more cumbersome notation, was

$$\frac{{\int}_{a}^{b}{p}^{{n}_{1}}*{(1-p)}^{{n}_{0}}\,dp}{{\int}_{0}^{1}{p}^{{n}_{1}}*{(1-p)}^{{n}_{0}}\,dp}.$$
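This ratio of integrals can be approximated numerically. The sketch below uses a simple midpoint rule, writing `n1` for the count of $Y{=}\mathit{true}$ and `n0` for the count of $Y{=}\mathit{false}$; the step count is an arbitrary accuracy choice.

```python
# P(a <= phi <= b | E) as a ratio of two integrals of the unnormalized
# posterior phi^n1 * (1 - phi)^n0, approximated by the midpoint rule.

def prob_in_range(a, b, n0, n1, steps=10000):
    def integral(lo, hi):
        width = (hi - lo) / steps
        return sum((lo + (i + 0.5) * width)**n1 *
                   (1 - (lo + (i + 0.5) * width))**n0
                   for i in range(steps)) * width
    return integral(a, b) / integral(0.0, 1.0)

# With no observations the posterior is uniform, so half the
# probability mass lies in [0.25, 0.75]:
half = prob_in_range(0.25, 0.75, n0=0, n1=0)
```

For a closed-form answer, the same quantity is the regularized incomplete beta function, but the numeric version suffices to illustrate Bayes' result.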

This kind of knowledge is used in surveys, where it may be reported that a survey is correct with an error of at most $5\%$, $19$ times out of $20$. It is also the same type of information that is used by probably approximately correct (PAC) learning, which guarantees an error of at most $\epsilon$ at least $1-\delta$ of the time. If an agent chooses the midpoint of the range $[a,b]$, namely $\frac{a+b}{2}$, as its hypothesis, it will have error less than or equal to $\frac{b-a}{2}$, just when the hypothesis is in $[a,b]$. The value $1-\delta$ corresponds to $P(\varphi \ge a\wedge \varphi \le b\mid E)$. If $\epsilon =\frac{b-a}{2}$ and $\delta =1-P(\varphi \ge a\wedge \varphi \le b\mid E)$, choosing the midpoint will result in an error of at most $\epsilon$ in $1-\delta$ of the time. PAC learning gives worst-case results, whereas Bayesian learning gives the expected error. Typically, the Bayesian estimate is more accurate, but the PAC results give a guaranteed bound on the error. The sample complexity, the number of examples required to obtain some given accuracy, for Bayesian learning is typically much less than that of PAC learning: many fewer examples are required to *expect to achieve* the desired accuracy than are needed to *guarantee* the desired accuracy.