Artificial Intelligence: Foundations of Computational Agents (2E)

For many of the prediction measures, the optimal prediction on the training data is the mean (the average value). In the case of Boolean data (with true represented as 1 and false as 0), the mean can be interpreted as a probability. However, the empirical mean, the mean of the training set, is typically not a good estimate of the probability of new cases. For example, just because an agent has not observed some value of a variable does not mean that the value should be assigned a probability of zero, which would imply that the value is impossible. Similarly, when predicting a student's future grades, the student's average grade may be an appropriate prediction if the student has taken many courses, but may not be appropriate for a student with just one recorded grade, and is not appropriate for a student with no recorded grades (for whom the average is undefined).

A simple way both to solve the zero-probability problem and to take prior knowledge into account is to use a real-valued pseudocount or prior count to which the training data is added.

Suppose the examples are values $v_1, \ldots, v_n$ and you want to make a prediction for the next value $v$, which we will write as $\widehat{v}$.

One prediction is the average. Suppose ${a}_{n}$ is the average of the first $n$ values, then:

$$a_n = \frac{v_1+\dots+v_{n-1}+v_n}{n} = \frac{n-1}{n}*a_{n-1}+\frac{v_n}{n} = a_{n-1}+\frac{v_n-a_{n-1}}{n}.$$

The running average keeps the current average of all of the data points seen. It can be implemented by storing the current average, $a$, and the number of values seen, $n$. When a new value $v$ arrives, $n$ is incremented and $(v-a)/n$ is added to $a$.
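The incremental update can be sketched in a few lines of Python (function and variable names are illustrative):

```python
def running_average(values):
    """Compute the mean incrementally, storing only the average a and count n."""
    a = 0.0  # current average
    n = 0    # number of values seen so far
    for v in values:
        n += 1
        a += (v - a) / n  # a_n = a_{n-1} + (v_n - a_{n-1}) / n
    return a

print(running_average([2, 4, 6, 8]))  # same as (2 + 4 + 6 + 8) / 4 = 5.0
```

This avoids storing all the data points: only two numbers are kept, regardless of how many values have been seen.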

When $n=0$, assume the prediction $a_0$ is used (which cannot be obtained from data, as there are no data for this case). A prediction that takes regression to the mean into account is to use:

$$\widehat{v}=\frac{v_1+\dots+v_n+c*a_0}{n+c}$$

where $c$ is a constant, the pseudocount of the number of assumed fictional data points. If $c=0$, the prediction is the average value. The value of $c$ controls the amount of regression to the mean. This can be implemented using the running average by initializing $a$ with $a_0$ and $n$ with $c$.
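The pseudocount scheme is just the running average initialized with the prior; a minimal sketch (names are illustrative):

```python
def pseudocount_predictor(values, a0, c):
    """Predict (v1 + ... + vn + c*a0) / (n + c), computed incrementally."""
    a = a0   # start at the prior mean
    n = c    # pretend c fictional data points have already been seen
    for v in values:
        n += 1
        a += (v - a) / n
    return a

# With a0 = 3 and c = 2, a single 5-star rating predicts (5 + 2*3) / 3
print(pseudocount_predictor([5], a0=3, c=2))  # 11/3, about 3.667
```

With no data the prediction is $a_0$; as more values arrive, the prediction moves from the prior toward the empirical average.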

Consider how to better estimate the ratings of restaurants in Example 7.14. The aim is to predict the average rating over the test data, not the average rating of the seen ratings.

You can use the existing data about other restaurants to make estimates about the new cases, assuming that the new cases are like the old. Before seeing anything, it may be reasonable to use the average rating of all restaurants as the value for $a_0$. This would be like assuming that a new restaurant is like an average restaurant (which may or may not be a good assumption). Suppose you are most interested in being accurate for top-rated restaurants. To estimate $c$, consider a restaurant with a single 5-star rating. You could expect this restaurant to be like the other restaurants with a 5-star rating. Let $a'$ be the average rating of the restaurants with a 5-star rating (where the average is weighted by the number of 5-star ratings each restaurant has). Then you would expect a restaurant with a single 5-star rating to be like the others and have this rating, and so $a' = a_0 + (5-a_0)/(c+1)$. You can then solve for $c$.

Suppose the average rating is 3, and the average rating for the restaurants with a 5-star rating is 4.5. Solving $4.5 = 3 + (5-3)/(c+1)$ gives $c = 1/3$. If the average for 5-star restaurants were instead 3.5, then $c$ would be 3. See Exercise 12.
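Rearranging $a' = a_0 + (5-a_0)/(c+1)$ gives $c = (5-a_0)/(a'-a_0) - 1$, and the two cases above can be checked directly (a sketch; the function name is illustrative):

```python
def pseudocount_from_top_ratings(a0, a_prime, top=5):
    """Solve a' = a0 + (top - a0)/(c + 1) for the pseudocount c."""
    return (top - a0) / (a_prime - a0) - 1

print(pseudocount_from_top_ratings(3, 4.5))  # approximately 1/3
print(pseudocount_from_top_ratings(3, 3.5))  # 3.0
```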

Consider the following thought experiment (or, better yet, implement it). First select a number $p$ uniformly at random from the range $[0,1]$. Suppose this is the ground truth for the probability that $Y=1$ for a variable $Y$ with domain $\{0,1\}$. Then generate $n$ training examples with $P(Y=1)=p$, for a number of values of $n$, such as $1, 2, 3, 4, 5, 10, 20, 100, 1000$. Let $n_1$ be the number of samples with $Y=1$, so there are $n_0 = n - n_1$ samples with $Y=0$. The learning problem for this scenario is: from $n_0$ and $n_1$ create an estimator $\widehat{p}$ that can be used to predict new cases. Then generate some (e.g., 100) test cases from the same $p$. The aim is to produce the estimator $\widehat{p}$ with the smallest error on the test cases. If you repeat this 1000 times, you will get a good idea of what is going on.
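One version of this experiment, comparing the empirical-frequency estimator with Laplace smoothing by sum-of-squares error on the test cases, can be sketched as follows (a minimal version for a single value of $n$; all names and parameter choices are illustrative):

```python
import random

def run_experiment(trials=1000, n=5, n_test=100, seed=0):
    """Compare two estimators of p by total sum-of-squares test error."""
    rng = random.Random(seed)
    err_empirical = err_laplace = 0.0
    for _ in range(trials):
        p = rng.random()                          # ground-truth P(Y=1)
        n1 = sum(rng.random() < p for _ in range(n))
        n0 = n - n1
        p_emp = n1 / (n0 + n1)                    # empirical frequency
        p_lap = (n1 + 1) / (n0 + n1 + 2)          # Laplace smoothing
        for _ in range(n_test):                   # error on fresh test cases
            y = 1 if rng.random() < p else 0
            err_empirical += (y - p_emp) ** 2
            err_laplace += (y - p_lap) ** 2
    return err_empirical, err_laplace

emp, lap = run_experiment()
print(lap < emp)  # Laplace smoothing accumulates lower total error
```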

If you try this, with log-likelihood you will find that $\widehat{p} = n_1/(n_0+n_1)$ works very poorly; one reason is that if either $n_0$ or $n_1$ is 0, and that value appears in the test set, the likelihood of the test set will be 0, which is the worst it could possibly be! It turns out that Laplace smoothing, defined by $\widehat{p} = (n_1+1)/(n_0+n_1+2)$, has the maximum likelihood of all estimators on the test set. It also works better than $\widehat{p} = n_1/(n_0+n_1)$ for sum-of-squares error.

If you were to select $p$ from some distribution other than the uniform distribution, adding 1 to the numerator and 2 to the denominator may not result in the best predictor.