Introduction

Recently there has been work to extend belief networks by allowing more structured representations of the conditional probability of a variable given its parents [8]. This structure has been expressed in terms of causal independencies [16, 34], parametric forms such as sigmoidal Bayesian networks [21, 27], or contextual independencies inherent in stating the conditional probabilities in terms of rules [23] or trees [5, 29]. In this paper we show how an algorithm that exploits conditional independence for efficient inference in belief networks can be extended to also exploit contextual independence. An earlier, less efficient version of this method, stated in terms of rules, appears in [25]; [32] gives an abstract mathematical analysis of how contextual independence can be exploited in inference.
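To make contextual independence concrete, the following small sketch (the variable names and probabilities are illustrative, not taken from the paper) contrasts a full tabular conditional probability with a tree-structured one. Here, in the context A = true the child is independent of C, and in the context A = false it is independent of B, so the tree needs 4 parameters where the table needs 8:

```python
def p_x_tabular(a, b, c):
    # Full conditional probability table: 2^3 = 8 parameters,
    # one entry per joint assignment of the parents (A, B, C).
    table = {
        (True,  True,  True):  0.9, (True,  True,  False): 0.9,
        (True,  False, True):  0.2, (True,  False, False): 0.2,
        (False, True,  True):  0.7, (False, False, True):  0.7,
        (False, True,  False): 0.4, (False, False, False): 0.4,
    }
    return table[(a, b, c)]

def p_x_tree(a, b, c):
    # Tree-structured form: 4 parameters, because in each context
    # the distribution ignores the contextually irrelevant parent.
    if a:
        return 0.9 if b else 0.2   # context A: X independent of C
    else:
        return 0.7 if c else 0.4   # context not-A: X independent of B
```

The two functions define the same conditional distribution; the tree simply avoids duplicating entries that the contextual independencies force to be equal.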

Section * introduces belief networks and an algorithm, variable elimination (VE) [33], also known as bucket elimination for belief assessment [11], for computing posterior probabilities in belief networks; the algorithm is based on nonlinear dynamic programming [2]. Section * presents a representation for conditional probabilities that lets us state contextual independence in terms of *confactors*. Section * shows how the VE algorithm can be extended to exploit the contextual independence in confactors. Section * shows how we can improve efficiency by reducing the amount of splitting. Section * gives some empirical results on standard and random networks; the details of the experiments are given in Appendix *. Section * gives comparisons to other proposals for exploiting contextual independencies. Section * presents conclusions and future work.
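As a rough illustration of the factor operations that VE rests on, the sketch below (the factor representation and all numbers are illustrative, not the paper's implementation) computes a marginal in a two-node network A → B by multiplying the factors that mention A and then summing A out:

```python
from itertools import product

# A factor is a pair (vars, table): vars is a tuple of variable names,
# and table maps each 0/1 assignment tuple (ordered as vars) to a number.
def make_factor(vs, table):
    return (tuple(vs), dict(table))

def multiply(f, g):
    # Pointwise product of two factors over the union of their variables.
    fv, ft = f
    gv, gt = g
    allv = fv + tuple(v for v in gv if v not in fv)
    table = {}
    for assign in product((0, 1), repeat=len(allv)):
        env = dict(zip(allv, assign))
        table[assign] = (ft[tuple(env[v] for v in fv)]
                         * gt[tuple(env[v] for v in gv)])
    return (allv, table)

def sum_out(f, var):
    # Eliminate var by summing it out of the factor.
    fv, ft = f
    i = fv.index(var)
    keep = fv[:i] + fv[i + 1:]
    table = {}
    for assign, val in ft.items():
        key = assign[:i] + assign[i + 1:]
        table[key] = table.get(key, 0.0) + val
    return (keep, table)

# P(A): P(A=1) = 0.3.  P(B|A): P(B=1|A=1) = 0.9, P(B=1|A=0) = 0.2.
pA = make_factor(['A'], {(0,): 0.7, (1,): 0.3})
pBA = make_factor(['A', 'B'], {(0, 0): 0.8, (0, 1): 0.2,
                               (1, 0): 0.1, (1, 1): 0.9})
# Eliminate A: P(B) = sum_A P(A) * P(B|A)
pB = sum_out(multiply(pA, pBA), 'A')
# pB represents P(B=1) = 0.7*0.2 + 0.3*0.9 = 0.41
```

The extension developed in the paper replaces these tabular factors with confactors, so that multiplication and summation can skip the entries that contextual independence makes redundant.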

David Poole and Nevin Lianwen Zhang, Exploiting Contextual Independence in Probabilistic Inference, Journal of Artificial Intelligence Research, 18 (2003), 263-313.
