Although there is a large body of work concerning abductive reasoning techniques, it deals largely with the application of these techniques to recognition tasks. Our own approach to recognition is most closely related to weighted abduction as presented by Appelt et al., which also nicely summarizes other work in the field. They are, like us, motivated by a desire to find the best explanation for the observations. They use their system for recognition, and do not mention its potential use for design.
Moore et al. point out that ``in practice, the completeness and accuracy of a user model cannot be guaranteed,'' and recognize the importance of user feedback in overcoming both inaccuracy and incompleteness. Jameson tells us that ``user modeling is an inherently speculative and error-prone enterprise,'' and that users should therefore be aware of the system's model of the user as well as of the system's modelling competence. The interaction paradigm advocated in this paper addresses these problems.
Feiner and McKeown discuss the design of presentations that emphasize the coordination of text and graphics. Note that they synthesize their presentations from underlying models, whereas we assemble ours from pre-existing material.
The WIP system developed at the German Center for Artificial Intelligence (see, for instance, Wahlster et al.) investigates the on-line integration of information from multiple sources into coherent multimedia presentations. Simple user categories are used to guide the presentations, which, like Feiner and McKeown's, are synthesized from underlying representations.
Goodman describes a system that provides multimedia explanations in intelligent training environments. The help component of his system automatically sequences video segments based upon the trainee's plan. Goodman points out that the use of canned video is most effective in an ``introductory overview of a domain,'' a description which suits our application.
The work introduced in this paper proceeds in two directions. First, we consider the application of abductive reasoning techniques to both recognition and design tasks (see also Csinger et al.) in a single framework. Second, we investigate how to use these techniques to implement working presentation systems that model their users (see also Csinger et al.).
We now extend the language of Theorist to include probabilities and costs, and we alter the notion of explanation to reflect a combination of design and recognition that has not been made explicit in previous work.
We partition the set of assumables $H$ into a set $H_R$ of those available for recognition and a set $H_D$ of those available for design. Each assumable $h \in H_R$ has associated with it a prior probability $P(h)$; $H_R$ is partitioned into disjoint and covering sets which correspond to independent random variables (as in Poole). Every assumable $h \in H_D$ is assigned a positive cost $c(h)$. This quantity can be interpreted in a number of implementation-dependent ways: it could be an estimate, for instance, of how hard it is for the system to realize the design element, or of how much cognitive or perceptual effort will be required from a human to comprehend some manifestation of the design element. Examples will be found later in this article.
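One way to represent this partition is sketched below. The random variables, hypotheses, and numeric values are illustrative assumptions of ours, not taken from the system described here; the only properties the sketch enforces are the ones stated above — priors within each disjoint covering set sum to one, and every design assumable carries a positive cost.

```python
# Hypothetical recognition assumables, grouped into disjoint covering sets,
# one set per independent random variable, plus positively-costed design
# assumables.  All names and numbers are assumed for illustration.
recognition = {
    "role": {"student": 0.7, "faculty": 0.3},       # random variable "role"
    "expertise": {"novice": 0.5, "expert": 0.5},    # random variable "expertise"
}
design_costs = {"video_overview": 4.0, "text_summary": 1.0}

# Priors within each covering set must sum to 1.
for var, alternatives in recognition.items():
    assert abs(sum(alternatives.values()) - 1.0) < 1e-9

# Every design assumable must have a positive cost.
assert all(c > 0 for c in design_costs.values())
```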
The partitioning of $H$ partitions each explanation $E$ of the observations into a model component $M = E \cap H_R$ and a design component $D = E \cap H_D$, which we denote as $E = \langle M, D \rangle$. We define a preference relation over explanations such that $\langle M_1, D_1 \rangle$ is preferred to $\langle M_2, D_2 \rangle$ if $P(M_1) > P(M_2)$, or if $P(M_1) = P(M_2)$ and $c(D_1) < c(D_2)$. This results in a lexicographic ordering of explanations, so the best explanation consists of the most plausible model of the user and the lowest-cost presentation.
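The lexicographic preference can be sketched as follows. This is a minimal illustration under assumed numbers, not the actual implementation: an explanation is reduced to the priors of its assumed model hypotheses and the costs of its assumed design elements, and the best explanation is the one with the highest joint model prior, breaking ties on total design cost.

```python
from dataclasses import dataclass
from functools import reduce
from operator import mul

@dataclass(frozen=True)
class Explanation:
    """An explanation split into a model (recognition) part and a design part."""
    model_priors: tuple   # priors of the assumed model hypotheses
    design_costs: tuple   # positive costs of the assumed design elements

    def plausibility(self):
        # Joint prior of the model, assuming independent random variables.
        return reduce(mul, self.model_priors, 1.0)

    def cost(self):
        return sum(self.design_costs)

    def sort_key(self):
        # Lexicographic: maximize model plausibility first, then minimize cost.
        return (-self.plausibility(), self.cost())

# Three candidate explanations with assumed numbers.
explanations = [
    Explanation(model_priors=(0.6,), design_costs=(5.0, 2.0)),
    Explanation(model_priors=(0.6,), design_costs=(3.0,)),
    Explanation(model_priors=(0.3,), design_costs=(1.0,)),
]
best = min(explanations, key=Explanation.sort_key)
# best: the 0.6-plausibility model with total design cost 3.0
```

Note that a cheap design attached to an implausible model (the third candidate) never wins: cost is only consulted among equally plausible models.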
Note that a design $D$ which explains the presentation in the context of some model $M_1$ may not be generated in the context of some other model $M_2$. The logic is a means of ``weeding out'' incoherent designs, and hence presentations.
A single abductive reasoning engine is employed both for recognition of the user model and for design of the presentation. Design and recognition are interleaved, in the sense that the rule being applied by the reasoner may call at any point for the assumption of either a design or a recognition assumable. A partial model and a partial design are accumulated until either the proof is complete or some other explanation becomes preferred, whereupon the current proof is suspended and the proof corresponding to the now-preferred explanation is continued. Interleaving design and recognition in this way has the further advantage that consistent but irrelevant assumptions are avoided: only the assumptions required by the design and model currently under consideration are made.
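This control scheme can be sketched as a best-first search over partial proofs ordered by the preference relation: a priority queue holds suspended partial proofs, the currently preferred one is extended by assuming a recognition or design assumable, and a suspended proof resumes automatically whenever it becomes preferred. Everything below — the hypothesis names, the coherence table standing in for the logic's rules, and the two-step goal list — is an assumed toy encoding, not the actual reasoner.

```python
import heapq
from functools import reduce
from operator import mul

# Assumed assumables: recognition hypotheses with priors, design elements
# with costs.  The coherence table stands in for the logic, which weeds out
# designs that are incoherent with the assumed model.
RECOGNITION = {"student": 0.7, "faculty": 0.3}
DESIGN = {"video_overview": 4.0, "text_summary": 1.0}
COHERENT = {("student", "video_overview"), ("student", "text_summary"),
            ("faculty", "text_summary")}

def key(model, design):
    prior = reduce(mul, (RECOGNITION[h] for h in model), 1.0)
    cost = sum(DESIGN[h] for h in design)
    return (-prior, cost)   # best-first: high prior first, then low cost

def best_explanation():
    # Each queue entry is a partial proof: (preference key, model assumptions,
    # design assumptions, remaining goals).  Popping a worse entry suspends it;
    # it resumes automatically if the preferred line of proof dies out.
    frontier = [(key((), ()), (), (), ("model", "design"))]
    while frontier:
        _, model, design, goals = heapq.heappop(frontier)
        if not goals:
            return model, design            # proof complete
        goal, rest = goals[0], goals[1:]
        if goal == "model":                 # assume a recognition assumable
            for h in RECOGNITION:
                m = model + (h,)
                heapq.heappush(frontier, (key(m, design), m, design, rest))
        else:                               # assume a coherent design assumable
            for h in DESIGN:
                if all((m, h) in COHERENT for m in model):
                    d = design + (h,)
                    heapq.heappush(frontier, (key(model, d), model, d, rest))
    return None
```

Because expansion is driven by the preference key, no assumable is ever assumed unless it lies on the currently preferred partial proof — the "no irrelevant assumptions" property noted above.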
Separating the assumables for model recognition from those for presentation design not only helps knowledge engineers express what they really mean, but has interesting ramifications for the way presentations are chosen; in particular, the lexicographic preference means that we do not give up good models for which we can find only bad designs. For instance, consider the case where we have disjoint assumables $\mathit{student}$ and $\mathit{faculty}$, where $P(\mathit{student}) > P(\mathit{faculty})$, but the lowest-cost design in the context of a model that assumes the user is a student costs more than the one in the context of a model that assumes the user is a faculty member. We do not give up the assumption that the user is a student; the reasons for deciding in favor of $\mathit{student}$ are not affected by the system's inability to find a good (low-cost) presentation.
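A minimal numeric check of this point, with priors and costs assumed by us for illustration:

```python
# Candidate explanations: (model prior, cheapest coherent design cost, model).
# The numbers are assumed for illustration only.
candidates = [
    (0.7, 6.0, "student"),   # more plausible model, expensive best design
    (0.3, 2.0, "faculty"),   # less plausible model, cheap best design
]

# Lexicographic preference: maximize the model's prior first,
# and consult design cost only to break ties.
best = min(candidates, key=lambda c: (-c[0], c[1]))
# best[2] is "student"
```

The student model is preferred despite its costlier design, because under the lexicographic ordering design cost never overrides model plausibility.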
Assumptions we have made to this point include: 1) the space of models covers the range of expected usage (i.e., no oceanographers will try to use an expert system for medical imaging), 2) observations of users are all mutually consistent (i.e., users are neither deliberately misleading the system nor supplying erroneous data or random input). These assumptions are obviously questionable for real systems in which user behavior may be only partly predictable and explainable, but they simplify the exposition of our ideas in this paper.