Generating and Evaluating Evaluative Arguments

Giuseppe Carenini (2000)


Thesis Committee: Alan Lesgold, Johanna Moore (chair), Steve Roth, Rich Thomason, James Voss


Abstract

Evaluative arguments are pervasive in natural human communication. In countless situations, people attempt to advise or persuade their interlocutors that something is good (vs. bad) or right (vs. wrong). With the proliferation of on-line systems serving as personal advisors and assistants, there is a pressing need to develop general and testable computational models of generating and presenting evaluative arguments.

Previous research on generating evaluative arguments has been characterized by two major limitations. First, because of the complexity of generating natural language, researchers have tended to focus only on specific aspects of the generation process. Second, because of a lack of systematic evaluation, it is frequently difficult to gauge the scalability and robustness of proposed approaches.

The research presented in this thesis addresses both limitations. By following principles from argumentation theory and computational linguistics, we have developed a complete computational model for generating evaluative arguments. In our model, all aspects of the generation process are covered in a principled way, from selecting and organizing the content of the argument to expressing the selected content in natural language. For content selection and organization, we devised an argumentation strategy based on guidelines from argumentation theory. For expressing the content in natural language, we extended and integrated previous work on generating evaluative arguments. The key knowledge source for both tasks is a quantitative model of user preferences.
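As a concrete illustration of such a quantitative model (a minimal sketch only, not the implementation described in the thesis): the abstract later notes that a simple linear model of the user's preferences was adopted, so one can picture an additive scheme in which each attribute of the entity is scored by a value function and the scores are combined with importance weights. All names, attributes, and numbers below are hypothetical.

# A minimal sketch of a linear (additive) model of user preferences.
# Everything here is illustrative, not the thesis's actual code or data.

def evaluate(entity, weights, value_functions):
    """Overall value of `entity` for a user: a weighted sum of
    per-attribute value functions. Each value function is assumed to
    return a score in [0, 1]; weights are assumed non-negative and
    to sum to 1, so the result also lies in [0, 1]."""
    return sum(weights[attr] * value_functions[attr](entity[attr])
               for attr in weights)

# Hypothetical user who weighs price over commute time when judging a house.
house = {"price": 250_000, "commute_minutes": 20}
weights = {"price": 0.6, "commute_minutes": 0.4}
value_functions = {
    "price": lambda p: max(0.0, 1.0 - p / 500_000),       # cheaper is better
    "commute_minutes": lambda m: max(0.0, 1.0 - m / 60),  # shorter is better
}
print(evaluate(house, weights, value_functions))  # ~0.57 for this user

Under a model of this kind, an argument tailored to a particular user would foreground the attributes that contribute most to, or detract most from, the entity's overall value for that user.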

To empirically test critical aspects of our generation model, we have devised and implemented an evaluation framework in which the effectiveness of evaluative arguments can be measured with real users. The design of the evaluation framework was based on principles and techniques from several different fields, including computational linguistics, social psychology, decision theory, and human-computer interaction.

Within the framework, we have performed an experiment to test two basic assumptions underlying the design of the computational model: that tailoring an evaluative argument to a model of the addressee's preferences increases its effectiveness, and that differences in conciseness significantly influence an argument's effectiveness. The experiment confirmed both assumptions.

A key goal of this work was to complete the full research cycle: developing a computational model, devising techniques to evaluate the model, and applying those techniques to evaluate aspects of the model. Because of the complexity of the issues involved, achieving this goal in the time allotted required limiting our investigation in several ways. First, with the exception of the argumentation strategy, this thesis examines only evaluative arguments about a single entity. Second, we adopted a simple linear model of the user's preferences. Third, the findings of our study are restricted to written textual arguments. Each of these limitations opens a direction for future research.



Please send comments and inquiries to carenini@cs.ubc.ca.