Intelligent User Interfaces research at UBC
Our long-term goal is to integrate research in Artificial Intelligence, Human-Computer Interaction and Cognitive Science to devise novel interfaces that can capture and adapt to specific user needs and preferences. Research topics include user models for cognitive and affective states, preference modeling and elicitation, interactive decision making, intelligent tutoring systems, user-adaptive information visualization, and visual interfaces for text analysis.
We focus in particular on the following research areas (though our work is not limited to them):
The goal of this project is to design information visualization systems that can adapt to the specific needs of each individual viewer. Toward this goal, we are exploring data sources that could help detect these needs in real time, including cognitive measures that impact perceptual abilities, interaction logs, and data from eye-tracking and physiological sensors. We are also investigating how to provide usable, non-intrusive user-adaptive interventions, such as suggesting an alternative visualization when the current one seems unsuitable for the viewer, or providing interactive help on the current visualization (e.g., drawing attention to specific areas of interest). See our demos here.
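As a minimal illustration of the kind of gaze-driven intervention described above, the sketch below flags a chart region for highlighting when the viewer has barely fixated on it. The region names, log format, and threshold are hypothetical, chosen only for the example; a deployed system would derive such rules from eye-tracking data.

```python
# Hypothetical rule for a gaze-driven adaptive intervention:
# if the viewer has rarely fixated on a relevant region of the
# visualization, draw attention to it (e.g., highlight it).
# Region names and the threshold are illustrative assumptions.

def needs_highlight(fixations, region, min_fixations=3):
    """fixations: list of (region, duration_ms) gaze samples.
    Returns True if the region received too few fixations."""
    count = sum(1 for r, _ in fixations if r == region)
    return count < min_fixations

# Toy gaze log: mostly on the bars, one glance at the legend.
gaze_log = [("bars", 220), ("bars", 180), ("legend", 90), ("axis_x", 140)]
print(needs_highlight(gaze_log, "legend"))  # True: only 1 fixation on legend
```

In practice the trigger would combine many features (fixation durations, scan paths, pupil measures) rather than a single count, but the control flow is the same: monitor gaze, detect an unmet need, intervene.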
Many critical decisions for individuals and organizations are framed as preferential choices: selecting the best option out of a set of alternatives. We are investigating interactive visualization techniques to support preferential choice. The design of our interfaces is grounded in detailed task models, and in testing them we typically measure both task performance and insights. See our demo for ValueCharts, one of our interfaces for decision-making support.
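Interfaces of this kind are typically built on an additive multi-attribute value model: each alternative's overall value is a weighted sum of normalized per-attribute scores. The sketch below assumes that model; the attribute names, weights, and scores are invented for illustration.

```python
# Sketch of an additive multi-attribute value model, the kind of
# preference model underlying decision-support tools like ValueCharts.
# Weights and scores below are illustrative assumptions, not real data.

def overall_value(scores, weights):
    """Weighted sum of normalized attribute scores (weights sum to 1)."""
    return sum(weights[a] * scores[a] for a in weights)

weights = {"price": 0.5, "quality": 0.3, "distance": 0.2}

hotels = {
    "Hotel A": {"price": 0.8, "quality": 0.4, "distance": 0.9},
    "Hotel B": {"price": 0.5, "quality": 0.9, "distance": 0.6},
}

# The preferred alternative is the one with the highest overall value.
best = max(hotels, key=lambda h: overall_value(hotels[h], weights))
print(best)  # Hotel A (0.70 vs. 0.64)
```

A visualization built on this model can show each weighted term as a stacked segment, so users can see exactly why one alternative outranks another and explore what happens when weights change.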
In many decision-making scenarios, ranging from choosing which camera to buy to voting in an election, people can benefit from knowing the opinions of others. As more and more documents expressing opinions are posted on the Web, extracting, summarizing and visualizing these useful resources becomes a critical task for many organizations and individuals.
We have proposed a framework that uses eye-tracking data to model users' learning in Interactive Simulations (ISs). Our goal is to build user models that can trigger adaptive support for students who do not learn well with ISs, a difficulty often caused by the unstructured and open-ended nature of these environments.
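To make the idea concrete, the sketch below flags a student for adaptive support from aggregate gaze features. The feature names and the single-threshold rule are hypothetical stand-ins; a real user model would be trained on labeled eye-tracking data rather than hand-set thresholds.

```python
# Hypothetical sketch: flag a student for adaptive support when too
# little gaze attention goes to the simulation's key display area.
# Feature names and the threshold are illustrative assumptions only.

def flag_for_support(gaze_time_ms, min_ratio=0.3):
    """gaze_time_ms: total fixation time (ms) per interface region.
    Returns True if the key plot area gets under min_ratio of attention."""
    total = sum(gaze_time_ms.values())
    if total == 0:
        return True  # no gaze data at all: err on the side of support
    return gaze_time_ms.get("plot_area", 0) / total < min_ratio

student = {"plot_area": 1200, "controls": 5200, "instructions": 800}
print(flag_for_support(student))  # True: plot_area gets ~17% of attention
```

The flag would then trigger an intervention in the simulation, such as a prompt directing the student's attention to the region they are neglecting.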