I am interested in the question: what should an intelligent agent do?

What does an agent do?

Papers that should be followed up on

Existential and Existence Uncertainty

Most models of entities and relations assume that we already know what entities exist, and whether two descriptions refer to the same or different individuals. I still think my AAAI-2007 paper is a good idea, and it presents the most coherent model of existence and identity I know of. Existence is not a property of an entity; when existence is false, there is no entity to have properties. Only existing entities have properties. Similarly, identity (equality) is not a property of two entities. Existence is a property of descriptions, and there might be multiple entities that fit a description.
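As a toy illustration only (this is not the AAAI-2007 formalism, and the numbers and property names are made up), here is one way to treat existence as a property of a description: the description induces a distribution over how many entities satisfy it, and properties are queried only conditional on existence.

```python
import random

# A minimal sketch (my own illustration, not the AAAI-2007 formalism):
# existence is a property of a *description*, not of an entity.
# A description induces a distribution over how many entities satisfy it,
# and only entities that exist have properties.

def sample_world(p_tall=0.7):
    """Sample how many entities fit a description, then their properties."""
    # 0 entities: nothing exists to have properties.
    # >1 entities: the description is ambiguous about identity.
    n = random.choices([0, 1, 2], weights=[0.3, 0.5, 0.2])[0]
    return [{"tall": random.random() < p_tall} for _ in range(n)]

worlds = [sample_world() for _ in range(10_000)]
existing = [w for w in worlds if w]   # worlds where the description is satisfied
p_tall_given_exists = sum(any(e["tall"] for e in w) for w in existing) / len(existing)
print(f"P(exists) ~ {len(existing) / len(worlds):.2f},",
      f"P(some fitting entity is tall | exists) ~ {p_tall_given_exists:.2f}")
```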

Missing Data

Most models of missing data require us to reason about what is missing, but we really need lightweight models in which missing data is simply ignored. This is particularly true for relational data, where almost all possible data is missing. Our paper at Artemiss 2020 gives some first ideas. I am surprised that LR+/- is not a common method (perhaps it is; please let me know if you have an older reference).
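One plausible reading of an "ignore the missing data" predictor like LR+/- is a logistic regression in which each feature has separate weights for being observed true and observed false, and unobserved features contribute nothing to the score. The sketch below reflects that reading rather than the exact Artemiss 2020 formulation; the feature names and weights are invented.

```python
import math

def lr_plus_minus(weights_pos, weights_neg, bias, observation):
    """One reading of an LR+/- style predictor (illustrative only):
    each feature has one weight for being observed true and another for
    being observed false; features missing from `observation` are ignored."""
    score = bias
    for feat, value in observation.items():   # only observed features contribute
        score += weights_pos[feat] if value else weights_neg[feat]
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical weights for two features; 'cites' is missing below, so it is ignored.
w_pos = {"coauthor": 1.2, "cites": 0.4}
w_neg = {"coauthor": -0.8, "cites": -0.1}
print(lr_plus_minus(w_pos, w_neg, bias=-0.5, observation={"coauthor": True}))
```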

Aggregation in Relational Models

Aggregation occurs in relational models and graph neural networks whenever an entity depends on a population of other entities. For example, our AAAI-2015 paper investigates how adding aggregators to MLNs always has side effects, and our SUM 2014 paper investigates the asymptotic properties of various aggregators. Most aggregators have peculiar properties, and none work well in all cases, still! Think about what happens as the number of related individuals ranges from zero to infinity.
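To see the issue concretely, the sketch below (my own illustration, not the AAAI-2015 or SUM 2014 constructions) tracks a few common aggregators as the population grows, with each related individual independently true with probability 0.2: noisy-or drifts to 1, a sigmoid of the expected count saturates, and the mean stays flat but needs an arbitrary default for the empty population.

```python
import math

def noisy_or(n, p_true=0.2, strength=0.3):
    """P(child) under noisy-or: tends to 1 as the population grows."""
    return 1 - (1 - strength * p_true) ** n

def logistic_of_sum(n, p_true=0.2, w=0.5, bias=-1.0):
    """Sigmoid of the expected weighted count: saturates at 0 or 1 as n grows."""
    return 1 / (1 + math.exp(-(bias + w * p_true * n)))

def mean_based(n, p_true=0.2):
    """Expected average over the parents: constant in n, undefined at n=0."""
    return p_true if n > 0 else 0.5   # some default is needed for the empty population

for n in [0, 1, 10, 100, 1000]:
    print(f"n={n:5d}  noisy-or={noisy_or(n):.3f}  "
          f"logistic(sum)={logistic_of_sum(n):.3f}  mean={mean_based(n):.3f}")
```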

Ontologies and Uncertainty

Ontologies define the vocabulary of an information system. Ontologies come before data: what is in the data depends on the ontology, and in most cases what is not represented in the data cannot be recovered. E.g., treating a sequence of data as a set loses information, as does ignoring the actual time, the location, or the myriad of other data that could have been collected. We need to incorporate the diversity of possible meta-information -- including ontologies -- when making predictions about data. E.g., our 2009 paper in IEEE Intelligent Systems shows relationships between triples and random variables, our StarAI-2013 paper sketches some ideas on integrating ontologies and StarAI models (see also Kou et al., IJCAI-2013), and our AAAI-2019 paper gives some preliminary work on embedding-based models. Definitions are useful, and we shouldn't ignore that they exist.
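As a purely illustrative example of the triples/random-variables connection (not the construction in the 2009 paper; the names are hypothetical): a functional property, such as each paper having exactly one venue, maps naturally to one multi-valued random variable per subject, while a non-functional property maps to one Boolean variable per triple.

```python
# Illustrative only: one simple way triples relate to random variables.
# A functional property (each subject has exactly one value) becomes a single
# multi-valued random variable per subject; a non-functional property becomes
# one Boolean variable per (subject, property, object) triple.

triples = [
    ("paper1", "publishedIn", "AAAI"),   # functional: exactly one venue
    ("paper1", "cites", "paper2"),       # non-functional: many citations
    ("paper1", "cites", "paper3"),
]

functional = {"publishedIn"}

random_variables = {}
for subj, prop, obj in triples:
    if prop in functional:
        # one variable per (subject, property); its domain is the set of possible
        # values (here, only the observed ones -- an ontology would list them all)
        random_variables.setdefault((subj, prop), set()).add(obj)
    else:
        # one Boolean variable per triple
        random_variables[(subj, prop, obj)] = {True, False}

for rv, domain in random_variables.items():
    print(rv, "has domain", domain)
```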

Old stuff

There are some (very old) detailed stories on:

I also have a list of all of my papers (many online) and some (old) online code.

Some co-authors who have web pages: Craig Boutilier, Ronen Brafman, Randy Goebel, Peter Gorniak, Holger Hoos, Michael Horsch, Alan Mackworth, Yang Xiang, Nevin Zhang.