
Introduction

Probabilistic inference is important for many applications in diagnosis, perception, user modelling, and wherever the state of the world must be inferred from observations under uncertainty. Unfortunately, general probabilistic inference is difficult both computationally and in terms of the number of probabilities that must be specified. Belief (Bayesian) networks [22] are a representation of independence amongst random variables. They are of interest because independence is useful in many domains, because they allow compact representations for many practical problems, and because there are algorithms that exploit these compact representations. Note that even approximate inference is computationally difficult in the worst case [7].
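To make the compactness concrete, consider a minimal sketch (not from the paper; the probabilities here are hypothetical) of a two-variable network X → Y. The chain-rule factorization P(X, Y) = P(X) P(Y | X) means we specify only the prior on X and the conditional table for Y, rather than the full joint table:

```python
# Hypothetical belief network X -> Y with binary variables.
# The joint factors as P(X, Y) = P(X) * P(Y | X), so only P(X) and
# P(Y | X) need to be specified, not the full joint table.

p_x = {True: 0.3, False: 0.7}                     # P(X)
p_y_given_x = {True: {True: 0.9, False: 0.1},     # P(Y | X=true)
               False: {True: 0.2, False: 0.8}}    # P(Y | X=false)

def joint(x, y):
    """P(X=x, Y=y) via the chain-rule factorization."""
    return p_x[x] * p_y_given_x[x][y]

def marginal_y(y):
    """P(Y=y) by summing out X."""
    return sum(joint(x, y) for x in (True, False))
```

For larger networks the savings compound: a chain of n binary variables needs O(n) parameters under the factorization, versus 2^n − 1 for the explicit joint.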

Recently there has been work to extend belief networks by allowing more structured representations of the conditional probability of a variable given its parents [8]. This work has been in terms of causal independencies [16][34], parametric forms such as sigmoidal Bayesian networks [21][27], or contextual independencies inherent in stating the conditional probabilities in terms of rules [23] or trees [29][5]. In this paper we show how an algorithm that exploits conditional independence for efficient inference in belief networks can be extended to also exploit contextual independence. [25] provides an earlier, less efficient, version in terms of rules. [32] gives an abstract mathematical analysis of how contextual independence can be exploited in inference.
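The kind of contextual independence these tree- and rule-based representations capture can be illustrated with a toy example (the variables and numbers are hypothetical, not from the paper): suppose Y has parents A and B, but in the context A = true, Y is independent of B. A tree-shaped conditional probability then needs only three numbers where a full table needs four:

```python
# Hypothetical tree-shaped conditional probability P(Y=true | A, B).
# In the context A=true, Y is independent of B (context-specific
# independence), so a single branch covers both values of B.
def p_y_true(a, b):
    if a:                      # context A=true: B is irrelevant
        return 0.9
    else:                      # context A=false: depends on B
        return 0.7 if b else 0.1
```

With many parents the effect is dramatic: a variable whose distribution is constant in some context collapses an exponential number of table rows into one branch, which is the structure the confactor representation in this paper is designed to expose.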

Section * introduces belief networks and an algorithm, variable elimination (VE) [33] or Bucket Elimination for belief assessment [11], for computing posterior probabilities in belief networks that is based on nonlinear dynamic programming [2]. Section * presents a representation for conditional probabilities that lets us state contextual independence in terms of confactors. Section * shows how the VE algorithm can be extended to exploit the contextual independence in confactors. Section * shows how we can improve efficiency by reducing the amount of splitting. Section * gives some empirical results on standard and random networks. The details of the experiments are given in Appendix *. Section * gives comparisons to other proposals for exploiting contextual independencies. Section * presents conclusions and future work.
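As background for the table-based VE algorithm the paper extends, the following is a minimal illustrative sketch (a simplified stand-in, not the paper's confactor-based algorithm): factors are pairs of a variable list and a table over binary assignments, and inference proceeds by multiplying factors and summing out variables.

```python
from itertools import product

# Minimal variable-elimination sketch over binary variables.  A factor is
# (vars, table), where table maps full assignment tuples of 0/1 to numbers.

def multiply(f, g):
    """Pointwise product of two factors over the union of their variables."""
    fv, ft = f
    gv, gt = g
    vs = fv + [v for v in gv if v not in fv]
    table = {}
    for asg in product((0, 1), repeat=len(vs)):
        env = dict(zip(vs, asg))
        table[asg] = (ft[tuple(env[v] for v in fv)]
                      * gt[tuple(env[v] for v in gv)])
    return (vs, table)

def sum_out(var, f):
    """Eliminate var from factor f by summing over its values."""
    fv, ft = f
    i = fv.index(var)
    vs = fv[:i] + fv[i + 1:]
    table = {}
    for asg, p in ft.items():
        key = asg[:i] + asg[i + 1:]
        table[key] = table.get(key, 0.0) + p
    return (vs, table)

# Example (hypothetical numbers): network X -> Y; eliminating X from the
# product of P(X) and P(Y | X) yields the marginal P(Y).
fx = (["X"], {(0,): 0.7, (1,): 0.3})
fyx = (["X", "Y"], {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.9})
py = sum_out("X", multiply(fx, fyx))
```

Because these tables are exponential in the number of variables in a factor, the paper's contribution is to replace them with confactors that stay small when contextual independence holds.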


David Poole and Nevin Lianwen Zhang, Exploiting Contextual Independence In Probabilistic Inference, Journal of Artificial Intelligence Research, 18, 2003, 263-313.
