# 8.1.4 Expected Values

The expected value of a numerical function on worlds is the function's average value over all possible worlds, weighted by the probability of each world.

Let $f$ be a function on worlds. $f$ could select the value of one of the random variables, it could be the number of bits used to describe the world, or it could be some measure of how much an agent likes the world.

The expected value of $f$, written ${{\mathcal{E}}_{P}(f)}$, with respect to probability $P$ is

 $\displaystyle{\mathcal{E}}_{P}(f)=$ $\displaystyle\sum_{\omega\in\Omega}f(\omega)*P(\omega)$

One special case: if $\alpha$ is a proposition and $f$ is the function with value 1 in worlds where $\alpha$ is true and value 0 otherwise, then ${\mathcal{E}}_{P}(f)=P(\alpha)$.
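The definition translates directly into code. The following sketch uses an invented joint distribution over two Boolean variables (not taken from the text) to compute ${\mathcal{E}}_{P}(f)$ by summing over worlds, and illustrates the indicator-function special case:

```python
from itertools import product

# Illustrative sketch: two Boolean random variables a and b (invented
# for this example), so there are four worlds, each a pair (a, b).
worlds = list(product([True, False], repeat=2))
P = {(True, True): 0.2, (True, False): 0.3,
     (False, True): 0.4, (False, False): 0.1}

def expected_value(f, P):
    """E_P(f): the sum of f(world) * P(world) over all worlds."""
    return sum(f(w) * P[w] for w in worlds)

# Special case: if f is the indicator of a proposition alpha,
# then E_P(f) = P(alpha). Here alpha is "a is true":
indicator_a = lambda w: 1 if w[0] else 0
print(expected_value(indicator_a, P))  # P(a) = 0.2 + 0.3 = 0.5
```

Because the indicator contributes $1*P(\omega)$ exactly for the worlds where $\alpha$ holds, the sum collapses to $P(\alpha)$.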

###### Example 8.9.

In an electrical domain, if $number\_of\_broken\_switches$ is the number of switches broken,

 ${{\mathcal{E}}_{P}(number\_of\_broken\_switches)}$

would give the expected number of broken switches given by probability distribution $P$. If the world acted according to the probability distribution $P$, this would give the long-run average number of broken switches. If there were three switches, each independently broken with probability $0.7$, the expected number of broken switches is

 $\displaystyle 0*0.3^{3}+1*3*0.7*0.3^{2}+2*3*0.7^{2}*0.3+3*0.7^{3}=2.1$

where $3$ appears in the middle two products because there are 3 worlds with exactly 1 switch broken, and 3 worlds with exactly 2 switches broken.
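The arithmetic can be checked by enumerating all $2^{3}=8$ worlds directly. This sketch assumes, as the calculation does, that the three switches break independently, each with probability $0.7$:

```python
from itertools import product

p = 0.7  # probability that each switch is broken, as in the example

total = 0.0
# Enumerate the 2^3 = 8 worlds; each world is a triple of Booleans
# saying which switches are broken. Assume independence.
for world in product([True, False], repeat=3):
    prob = 1.0
    for broken in world:
        prob *= p if broken else (1 - p)
    total += sum(world) * prob  # number of broken switches in this world

print(total)  # close to 2.1, which is 3 * 0.7 by linearity of expectation
```

That the answer equals $3*0.7$ is no accident: the expected value of a sum of variables is the sum of their expected values, and each switch contributes $0.7$.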

In a manner analogous to the semantic definition of conditional probability, the conditional expected value of $f$ conditioned on evidence $e$, written ${\mathcal{E}}(f\mid e)$, is

 $\displaystyle{\mathcal{E}}(f\mid e)=$ $\displaystyle\sum_{\omega\in\Omega}f(\omega)*P(\omega\mid e).$

###### Example 8.10.

The expected number of broken switches given that light $l_{1}$ is not lit is given by

 ${\mathcal{E}}(number\_of\_broken\_switches\mid\neg lit(l_{1})).$

This is obtained by averaging the number of broken switches over all of the worlds in which light $l_{1}$ is not lit.
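Conditioning amounts to restricting the sum to the worlds where the evidence holds and renormalizing by the probability of the evidence. The text does not give a model connecting $lit(l_{1})$ to the switches, so this sketch substitutes a hypothetical piece of evidence, "at least one switch is broken", over the three-switch worlds of Example 8.9:

```python
from itertools import product

p = 0.7  # probability that each switch is broken (as in Example 8.9)

def joint(world):
    """P(world), assuming the switches break independently."""
    prob = 1.0
    for broken in world:
        prob *= p if broken else (1 - p)
    return prob

worlds = list(product([True, False], repeat=3))

# Hypothetical evidence e: at least one switch is broken. (A stand-in
# for "l1 is not lit", whose model is not specified in the text.)
def e(world):
    return any(world)

# E(f | e) = sum over worlds satisfying e of f(w) * P(w), divided by P(e)
p_e = sum(joint(w) for w in worlds if e(w))
cond_expectation = sum(sum(w) * joint(w) for w in worlds if e(w)) / p_e
print(cond_expectation)  # about 2.158: larger than 2.1, since worlds
                         # with zero broken switches are excluded
```

Note that $P(e)=1-0.3^{3}=0.973$ here, and the conditional expectation is the unconditional one divided by $P(e)$, because the excluded worlds contribute $0$ to the numerator.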

If a variable is Boolean, with $true$ represented as 1 and $false$ as 0, the expected value of the variable is the probability that the variable is $true$. Thus any algorithm for computing expected values can also be used to compute probabilities, and any theorem about expected values applies directly to probabilities.