First-order probabilistic inference

By Jacek Kisynski

Probabilistic graphical models, such as belief networks, are a popular tool for representing dependencies between random variables. However, such standard representations are propositional, and hence not well suited for describing relations between individuals or quantifying over sets of individuals. First-order logic can represent relations and quantification over individuals, but it does not handle uncertainty. The first representations that combine probability and first-order logic (first-order probabilistic models) were proposed nearly twenty years ago, and many first-order probabilistic languages have since emerged. The most common inference technique for such models consists of dynamically grounding the portion of the first-order model that is relevant to the query, then performing standard probabilistic inference at the grounded, propositional level. The problem is that these grounded models may be extremely large, rendering inference intractable even for very simple first-order models.
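As a rough illustration of the blow-up from grounding (not part of the talk; the model and names below are hypothetical), consider a single parameterized factor over a relation such as friends(X, Y): grounding it over a population of n individuals produces n^2 propositional factors, even though the first-order model contains only one factor. A minimal Python sketch, assuming a toy population of 1,000 individuals:

    from itertools import product

    def ground(num_logvars, population):
        """Enumerate all groundings of a parameterized factor.

        num_logvars: number of logical variables in the factor.
        population: individuals the logical variables range over.
        """
        return list(product(population, repeat=num_logvars))

    people = [f"person_{i}" for i in range(1000)]

    # A factor over friends(X, Y) has two logical variables, so grounding
    # yields |population|^2 propositional factors: 1,000,000 factors from
    # a single parameterized factor in the first-order model.
    groundings = ground(2, people)
    print(len(groundings))  # 1000000

Each of those million groundings must then be handled by a propositional inference procedure, which is what motivates performing inference at the first-order level instead.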

Over the last five years there has been significant progress on the difficult problem of performing inference directly at the first-order level (first-order probabilistic inference). In my talk I will give an overview of work on exact first-order probabilistic inference by various researchers.
