Abstract

We are entering an era in human-computer interaction where new display form factors, including large displays, promise to efficiently support an entire class of tasks that are not well supported by traditional desktop computing interfaces. We develop a "body-centric" model of interaction appropriate for use with very large wall displays. We draw on knowledge of how the brain perceives and operates in the physical world, including the concepts of proprioception, interaction spaces, and social conventions, to drive the development of novel interaction techniques. The techniques we develop include an approach for embodying the user as a virtual shadow on the display, which is motivated by physical shadows and their affordances. Other techniques include methods for selecting and manipulating virtual tools, data, and numerical values by enlisting different parts of the user's body, methods for easing multi-user collaboration by exploiting social norms, and methods for mid-air text input. We then present a body-centric architecture that supports the implementation of interaction techniques such as those we designed. The architecture maintains a computational geometric model of the entire scene, including users, displays, and any relevant physical objects, which a developer can query when building novel interaction techniques or applications. Finally, we investigate aspects of low-level human performance relevant to a body-centric model. We conclude that traditional models of performance, particularly Fitts' law, are inadequate when applied to physical pointing on large displays where control-display gain can vary widely, and we show that a two-part formulation due to Welford is more suitable. Our investigations provide a foundation for a comprehensive body-centric model of interaction with large wall displays that will enable a number of future research directions.
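
For reference, the two formulations being contrasted take the following standard forms from the pointing literature (a sketch; D is target distance, W is target width, and a, b, b1, b2 are empirically fitted constants, so the thesis's exact notation may differ):

    Fitts' law (Shannon formulation):    MT = a + b log2(D/W + 1)
    Welford's two-part formulation:      MT = a + b1 log2(D) + b2 log2(1/W)

Because a change in control-display gain rescales motor-space distance and width by the same factor, the single Fitts index of difficulty is unchanged by gain; the separate distance and width coefficients in Welford's form can capture gain-dependent differences in movement time whenever b1 and b2 differ.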
