
Causal Independence

Bayesian networks place no restriction on how a node depends on its parents. Unfortunately, this means that in the most general case we need to specify an exponential (in the number of parents) number of conditional probabilities for each node. There are many cases where there is structure in the probability tables that can be exploited both for acquisition and for inference. One such case, which we investigate in this paper, is known as `causal independence'.

In one interpretation, arcs in a BN represent causal relationships; the parents of a variable $e$ are viewed as causes that jointly bear on the effect $e$. Causal independence refers to the situation where the causes contribute independently to the effect $e$.

More precisely, random variables $c_1, c_2, \ldots, c_m$ are said to be causally independent w.r.t. effect $e$ if there exist random variables $\xi_1, \xi_2, \ldots, \xi_m$ that have the same frame, i.e., the same set of possible values, as $e$, such that

  1. For each $i$, $\xi_i$ probabilistically depends on $c_i$ and is conditionally independent of all other $c_j$'s and all other $\xi_j$'s given $c_i$, and

  2. There exists a commutative and associative binary operator $*$ over the frame of $e$ such that $e = \xi_1 * \xi_2 * \cdots * \xi_m$.
Using the independence notion of Pearl [28], let $I(X, Z, Y)$ mean that $X$ is independent of $Y$ given $Z$; the first condition is then

$I(\xi_1, c_1, \{c_2, \ldots, c_m, \xi_2, \ldots, \xi_m\}),$

and similarly for the other variables. This entails $I(\xi_i, c_i, c_j)$ and $I(\xi_i, c_i, \xi_j)$ for each $i$ and $j$ where $i \neq j$.
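
Condition 1 has a consequence worth making explicit: by the chain rule, the contributions are mutually independent given their causes,

\[ P(\xi_1, \ldots, \xi_m \mid c_1, \ldots, c_m) \;=\; \prod_{i=1}^{m} P(\xi_i \mid c_i). \]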

We refer to $\xi_i$ as the contribution of $c_i$ to $e$. In less technical terms, causes are causally independent w.r.t. their common effect if individual contributions from different causes are independent and the total influence on the effect is a combination of the individual contributions.

We call the variable $e$ a convergent variable, as it is where independent contributions from different sources are collected and combined (and for lack of a better name). Non-convergent variables will simply be called regular variables. We call $*$ the base combination operator of $e$.

The definition of causal independence given here is slightly different from those given by Heckerman and Breese [16] and Srinivas [34]. However, it still covers common causal independence models such as noisy OR-gates [14,28], noisy MAX-gates [10], noisy AND-gates, and noisy adders [6] as special cases. One can see this in the following illustration.
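
Take the noisy OR-gate as a representative case (a standard formulation from the literature cited above, sketched here for concreteness; the parameter names $q_i$ are ours). All variables are binary, and each contribution $\xi_i$ depends only on its cause $c_i$ via

$P(\xi_i = 1 \mid c_i = 1) = q_i, \qquad P(\xi_i = 1 \mid c_i = 0) = 0,$

while the base combination operator is logical OR: $e = \xi_1 \vee \xi_2 \vee \cdots \vee \xi_m$. This yields the familiar noisy-OR conditional probability

$P(e = 0 \mid c_1, \ldots, c_m) = \prod_{i \,:\, c_i = 1} (1 - q_i).$

The other models fit the same mold: for noisy MAX-gates the base combination operator is max, for noisy AND-gates it is conjunction, and for noisy adders it is addition.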

The following example is not an instance of any causal independence model that we know of:

In the traditional formulation of a Bayesian network, the number of conditional probabilities that must be specified for a variable is exponential in its number of parents. With causal independence, each contribution $\xi_i$ depends only on $c_i$, so it suffices to specify the tables $P(\xi_i \mid c_i)$; the number of conditional probabilities is therefore linear in $m$. This is why causal independence can reduce the complexity of knowledge acquisition [17,28,26,25]. In the following sections, we show how causal independence can also be exploited for computational gain.
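
To make the parameter count concrete, here is a minimal sketch (hypothetical code, not from the paper; the function and variable names are ours) that builds the full conditional distribution of a binary noisy OR effect from its $m$ per-cause parameters. The $m$ numbers in q are all that need to be elicited, yet they determine a table with $2^m$ rows.

    # Minimal sketch of a binary noisy OR-gate (illustrative, not from the paper).
    # q[i] is P(xi_i = 1 | c_i = 1); a present cause i fails to trigger the
    # effect with probability 1 - q[i], and an absent cause never triggers it.
    from itertools import product

    def noisy_or_prob(causes, q):
        """Return P(e = 1 | c_1, ..., c_m) where e = xi_1 OR ... OR xi_m."""
        p_e0 = 1.0
        for c_i, q_i in zip(causes, q):
            if c_i == 1:
                p_e0 *= 1.0 - q_i    # cause i independently fails to fire
        return 1.0 - p_e0

    q = [0.9, 0.7, 0.8]              # m = 3 elicited parameters: linear in m
    for causes in product([0, 1], repeat=len(q)):
        print(causes, noisy_or_prob(causes, q))   # induced table: 2^m = 8 rows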




