Learning First-Order Probabilistic Models
by Michael Chiang
Models used in machine learning are for the most part predicated on the i.i.d.
assumption about data, which is known to be inappropriate for "relational"
domains. In relational domains, objects are often related to other objects in a
qualitative way, and to uncover such relationships we require a modelling
language expressive enough to describe them.
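As an illustration of what such a language buys us, here is a minimal sketch in the style of a Markov logic network, using the standard "smokers and friends" toy domain (the domain, rule, and weight are hypothetical choices for this example, not part of the talk): a weighted first-order rule is grounded over all pairs of objects, and a world's unnormalized score grows with the number of satisfied groundings.

```python
import itertools
import math

# Hypothetical toy domain: three people and a single weighted
# first-order rule, Friends(x, y) ∧ Smokes(x) → Smokes(y).
people = ["anna", "bob", "carol"]

# One possible world: truth values for the ground atoms.
smokes = {"anna": True, "bob": True, "carol": False}
friends = {("anna", "bob"): True, ("bob", "anna"): True,
           ("anna", "carol"): False, ("carol", "anna"): False,
           ("bob", "carol"): False, ("carol", "bob"): False}

WEIGHT = 1.5  # illustrative rule weight

def satisfied_groundings(smokes, friends):
    """Count groundings of the rule satisfied in this world."""
    count = 0
    for x, y in itertools.permutations(people, 2):
        body = friends[(x, y)] and smokes[x]
        # An implication holds when its body is false or its head is true.
        if (not body) or smokes[y]:
            count += 1
    return count

# Markov-logic-style unnormalized score: exp(weight × #satisfied groundings).
n = satisfied_groundings(smokes, friends)
score = math.exp(WEIGHT * n)
```

A propositional i.i.d. model would treat each person's smoking status as an independent draw; the weighted rule above instead couples related objects, which is exactly the kind of dependency relational languages are designed to express.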
First-order logic and its probabilistic extensions cater specifically to
such problems, and learning models in these expressive languages is currently
the subject of much investigation. In this talk I will give an overview of
relational problems, the role of first-order logic and probabilistic first-order
logic, and some issues in representation and learning. I will also describe my
current work on this problem.