Eyecatch: Simulating Visuomotor Coordination for Object Interception

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2012)

Sang Hoon Yeo   Martin Lesmana   Debanga R. Neog   Dinesh K. Pai

Sensorimotor Systems Laboratory, University of British Columbia


Figure: Catching a thrown ball. The movement depends on visual estimates of the ball's motion, which trigger shared motor programs for eye, head, arm, and torso movement. Gaze sets the goal for the hand. Initially the movements are reactive; as visual estimates improve, predictive movements are generated toward the final catching position.

Abstract

We present a novel framework for animating human characters performing fast visually guided tasks, such as catching a ball. The main idea is to consider the coordinated dynamics of sensing and movement. Based on experimental evidence about such behaviors, we propose a generative model that constructs interception behavior online, using discrete submovements directed by uncertain visual estimates of target movement. An important aspect of this framework is that eye movements are included as well, and play a central role in coordinating movements of the head, hand, and body. We show that this framework efficiently generates plausible movements and generalizes well to novel scenarios.
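The core idea of the abstract, constructing interception online from discrete submovements aimed at progressively better visual estimates, can be illustrated with a toy sketch. This is not the paper's implementation: it is a simplified 1-D simulation under assumed parameters (`noise0`, `n_submovements` are hypothetical), where each submovement follows a standard minimum-jerk profile toward the current noisy estimate of the catch point, and estimate noise shrinks as viewing time accumulates, so early movements are reactive and later ones predictive.

```python
import random

def min_jerk(t):
    """Minimum-jerk position profile on [0, 1] (0 at start, 1 at end)."""
    return 10 * t**3 - 15 * t**4 + 6 * t**5

def simulate_catch(target=1.0, n_submovements=4, noise0=0.3, seed=0):
    """Drive a 1-D 'hand' toward `target` with discrete submovements.

    Each submovement is aimed at a noisy visual estimate of the catch
    point; the noise shrinks with each new observation (a toy stand-in
    for improving visual estimates of the ball's motion).
    """
    rng = random.Random(seed)
    pos = 0.0
    for k in range(n_submovements):
        noise = noise0 / (k + 1)                  # estimate improves over time
        estimate = target + rng.gauss(0.0, noise) # uncertain visual estimate
        start = pos
        for step in range(1, 11):                 # sample the submovement in time
            s = min_jerk(step / 10.0)
            pos = start + s * (estimate - start)
    return pos
```

Because each submovement ends exactly at its (noisy) goal, the hand lands within the final estimate's error of the true catch point; with more submovements the residual error shrinks.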
Paper
PDF (6.8M)
Video
MOV (12.6M)
Bibtex

@ARTICLE{Yeo:2012,
   author = {Sang Hoon Yeo and Martin Lesmana and Debanga R. Neog and Dinesh K. Pai},
   title = {Eyecatch: Simulating Visuomotor Coordination for Object Interception},
   journal = {ACM Trans. Graph. (Proc. SIGGRAPH)},
   year = {2012},
   volume = {31},
   number = {4},
}
Funding
  • Canada Research Chairs Program
  • Peter Wall Institute for Advanced Studies
  • NSERC
  • Institute for Computing, Information and Cognitive Systems
  • Canada Foundation for Innovation
  • Human Frontier Science Program