WUAV 2013 - 1st International Workshop on User-Adaptive Visualization

In conjunction with the 21st Conference on User Modeling, Adaptation and Personalization (UMAP), Rome, Italy, June 10-14, 2013.


News

June 4, 2013: Program now online
Feb 1, 2013: UAV reference page online
Jan 28, 2013: First Call for Papers
Jan 28, 2013: WUAV website now online

Important dates

Apr 04, 2013: Submission Deadline
May 01, 2013: Notification to Authors
May 09, 2013: Camera-ready Due
June 14, 2013: Workshop day


The 1st International Workshop on User-Adaptive Visualization will take place in Rome, Italy on June 14, 2013, in conjunction with UMAP 2013, the 21st Conference on User Modeling, Adaptation and Personalization.

Call for Papers

Recent advances in visualization research have shown that individual user needs, abilities, and preferences can have a significant impact on visualization effectiveness. It is therefore important to investigate visualization techniques that support each user by adapting to such individual differences, both in terms of stable (long-term) user traits (e.g. perceptual speed, personality) and transitory (short-term) states (e.g. attention, current task, evolving expertise). To this end, we invite participation in the 1st International Workshop on User-Adaptive Visualization, held in conjunction with UMAP 2013. The goal of this workshop is to build the foundations for a new research community specifically focused on this novel and promising topic of user-adaptive visualization. In particular, it aims to bring together researchers from the areas of visualization (including InfoVis, SciVis and Visual Analytics), UMAP, HCI, and cognitive/perceptual psychology, in order to share and discuss multidisciplinary knowledge relating to user-adaptive visualization research. For a sample list of previous work on user-adaptive visualization, please visit the complementary reference page.

This full-day workshop will focus on the following four questions:
  1. What individual user differences should be considered for adaptation?
    The first challenge for any user-adaptive system lies in understanding which individual user differences should be considered for adaptation. In particular, such differences can consist of stable, long-term user traits such as cognitive abilities or styles (e.g. perceptual speed, working memory), as well as transitory, short-term differences in mental states (e.g. expertise, attention, motivation) or interaction context (e.g. current task). Addressing this challenge involves two steps. The first is identifying which individual user differences influence interaction performance and satisfaction strongly enough to justify adaptation. In visualization, there are already results on the impact of a number of such user characteristics on user performance. For example, cognitive abilities (e.g. perceptual speed) and personality traits (e.g. locus of control) have been shown to significantly influence users’ completion time and/or accuracy when solving simple visualization tasks. However, more work is needed to investigate the impact of additional user characteristics and the generalizability of such results to a wider range of visualizations. Similarly, more research needs to be conducted in visual analytics on combining automated analysis methods with a user’s background knowledge.
    The second step relates to investigating if and how accurately the relevant user characteristics can be captured. Techniques can range from tracking interaction data such as click-streams or task performance, to processing more sophisticated behavioral data such as eye gaze or physiological signals. Initial research in visualization has shown how interaction data such as selection latency or user eye gaze can be used to infer different user needs and abilities. However, such initial results need to be broadened to a wider variety of user differences, visualizations, and data capturing techniques.
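As a purely illustrative sketch of the second step, the following shows one possible way interaction features (e.g. fixation duration, selection latency) could be mapped to a coarse estimate of a user trait such as perceptual speed. All feature names, weights, and thresholds here are invented for illustration; a real system would learn such a model from user-study data.

```python
# Hypothetical sketch: inferring a coarse user trait (perceptual speed)
# from interaction features. Weights and thresholds are placeholders,
# not values from the literature cited in this call.

def infer_perceptual_speed(mean_fixation_ms, mean_selection_latency_ms):
    """Classify a user as 'high' or 'low' perceptual speed.

    Long fixations and slow selections are (illustratively) taken as
    evidence of lower perceptual speed.
    """
    # Simple linear evidence score; in practice this would be a
    # classifier trained on labeled interaction data.
    score = (0.6 * (mean_fixation_ms / 300.0)
             + 0.4 * (mean_selection_latency_ms / 1500.0))
    return "low" if score > 1.0 else "high"

# Short fixations and quick selections -> 'high'
print(infer_perceptual_speed(220, 900))   # -> high
# Long fixations and slow selections -> 'low'
print(infer_perceptual_speed(420, 2400))  # -> low
```

In a deployed system, such an inference would of course be probabilistic and updated continuously as more behavioral data arrives.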
  2. When is adaptive support appropriate and/or necessary?
    The second question regarding user-adaptive visualization systems relates to understanding when it is appropriate and/or necessary to provide adaptive support to the user. Addressing this challenge involves formalizing adaptation decisions that can identify those situations in which the benefits of providing adaptive interventions outweigh their cost (e.g., disrupting the interaction). While there has been extensive work on detecting when a user needs help in fields such as Intelligent Tutoring Systems or Adaptive Games, visualization research has so far been limited in this regard. To our knowledge, the research by Gotz et al. is so far the only work that actively monitors real-time user behavior (through mouse clicks) in order to infer such needs for intervention. More research is needed to formalize which situations require or would benefit from adaptive interventions during a user’s interaction with visualizations. Additional work is necessary to study how these situations can be detected in real time, e.g. could they be detected solely from eye-tracking data when the visualization is not interactive? Finally, similar to what is being done in other fields, it is also important to identify how adaptive interventions may negatively impact other aspects of the interaction with visualizations.
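The cost/benefit trade-off described above can be sketched as a simple expected-utility gate. This is only one conceivable formalization, assuming an upstream estimate of the probability that the user needs help; the benefit and cost values below are invented placeholders, not parameters from any cited system.

```python
# Hypothetical adaptation-decision gate: intervene only when the
# expected benefit of help outweighs the disruption it causes.
# 'benefit' and 'disruption_cost' are illustrative placeholders.

def should_intervene(p_needs_help, benefit=1.0, disruption_cost=0.35):
    """Return True iff the expected utility of intervening is positive.

    Intervening always pays the disruption cost, but the benefit is
    only realized if the user actually needed the help.
    """
    return p_needs_help * benefit - disruption_cost > 0

print(should_intervene(0.8))  # True: likely confusion outweighs cost
print(should_intervene(0.2))  # False: intervention would mostly disrupt
```

A real system would additionally have to estimate p_needs_help in real time from behavioral data, which is precisely the open problem this question raises.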
  3. How to adapt, i.e. what techniques to use and at what level of intrusiveness?
    Adaptive systems need to decide how adaptation should be provided. General methods to provide adaptation include, among others: adapting interface elements (e.g. highlighting specific interface components), having the interface take over a task for the user (e.g. performing a required computation), generating explicit help messages to guide the user through a task, and recommending alternatives (e.g. suggesting a different visualization that is better suited to the user’s abilities). Moreover, each method can be delivered via fully adaptive or mixed-initiative approaches, as well as through a variety of designs (e.g. highlighting can be performed by changing color or size, or by adding animation). New studies need to be conducted to gain an in-depth understanding of which types of interventions are most appropriate for particular visualizations/users, as well as what level of intrusiveness or subtlety is necessary and appropriate.
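To make one of these delivery methods concrete, here is a minimal sketch of a color-based highlighting intervention: elements relevant to the inferred current task are emphasized and the rest de-emphasized. The element representation and color choices are invented for illustration only.

```python
# Illustrative sketch of a 'highlight' intervention delivered by
# recoloring visualization elements. Element structure and colors
# are hypothetical, not from any system described in this call.

def apply_highlight(elements, relevant_ids,
                    highlight_color="#ff7f0e", dim_color="#cccccc"):
    """Return a new element list with task-relevant items emphasized
    and all others de-emphasized (one common highlighting design)."""
    return [
        {**e, "color": highlight_color if e["id"] in relevant_ids else dim_color}
        for e in elements
    ]

bars = [{"id": "a", "color": "#1f77b4"},
        {"id": "b", "color": "#1f77b4"}]
print(apply_highlight(bars, {"b"}))
```

An alternative, less intrusive design would keep original colors and add a subtle outline or animation instead, which is exactly the kind of intrusiveness trade-off this question targets.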
  4. How to evaluate?
    Lastly, the evaluation of user-adaptive systems is generally considered a difficult task, because it requires isolating the effects of the adaptive components on the interaction from the impact of other design elements. In addition, an adaptive system needs to “learn” about its user before it can precisely assess what the user needs. This further complicates system evaluation, requiring longitudinal studies that are often hard to implement. While some of the existing work on evaluating adaptive systems may be leveraged for adaptive visualization, the extent to which it applies to this novel field remains open to investigation. Related research questions we aim to address in the workshop include which evaluation methodologies to use and which metrics to adopt.
We encourage submissions from diverse backgrounds, including visualization (encompassing InfoVis, SciVis and Visual Analytics), UMAP, HCI, and cognitive/perceptual psychology. We invite long papers (max. 6 pages) describing mature ideas and methods, as well as short papers (max. 3 pages) presenting novel work in progress. For full details on the submission format and procedure, please refer to the submission page. Papers will be selected based on originality, quality, and ability to promote discussion. Accepted papers will be included in the workshop proceedings and published on the CEUR proceedings site. At least one author of each accepted paper must attend the workshop.

For further questions, please contact a member of the organizing committee.