Character Controllers using Motion VAEs

ACM Transactions on Graphics (SIGGRAPH 2020)
HUNG YU LING, University of British Columbia
FABIO ZINNO, Electronic Arts Vancouver
GEORGE CHENG, Electronic Arts Vancouver
MICHIEL VAN DE PANNE, University of British Columbia



A fundamental problem in computer animation is that of realizing purposeful and realistic human movement given a sufficiently-rich set of motion capture clips. We learn data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs. The latent variables of the learned autoencoder define the action space for the movement and thereby govern its evolution over time. Planning or control algorithms can then use this action space to generate desired motions. In particular, we use deep reinforcement learning to learn controllers that achieve goal-directed movements. We demonstrate the effectiveness of the approach on multiple tasks. We further evaluate system-design choices and describe the current limitations of Motion VAEs.
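The core idea above — an autoregressive decoder whose latent variables serve as the action space for a controller — can be illustrated with a minimal sketch. The dimensions, the single-layer decoder, and the random policy below are all illustrative stand-ins; the paper's actual Motion VAE uses a trained mixture-of-experts decoder over full-body pose features, and the policy is learned with deep reinforcement learning.

```python
import numpy as np

# Illustrative sizes -- the paper's pose and latent dimensions differ.
POSE_DIM = 8     # per-frame character pose features (placeholder)
LATENT_DIM = 4   # VAE latent dimension, i.e. the controller's action space

rng = np.random.default_rng(0)

# Stand-in decoder: one random affine layer with tanh. The real Motion VAE
# decoder is a trained mixture-of-experts network.
W = rng.standard_normal((POSE_DIM, POSE_DIM + LATENT_DIM)) * 0.1
b = np.zeros(POSE_DIM)

def decode(prev_pose, z):
    """Autoregressive step: next pose conditioned on previous pose and latent z."""
    x = np.concatenate([prev_pose, z])
    return np.tanh(W @ x + b)

def rollout(init_pose, policy, steps):
    """Generate motion by repeatedly choosing a latent 'action' and decoding."""
    poses = [init_pose]
    for _ in range(steps):
        z = policy(poses[-1])          # a controller (e.g. learned via RL) picks z
        poses.append(decode(poses[-1], z))
    return np.stack(poses)

# Placeholder policy: sample z from the prior N(0, I).
random_policy = lambda pose: rng.standard_normal(LATENT_DIM)
motion = rollout(np.zeros(POSE_DIM), random_policy, steps=10)
print(motion.shape)  # (11, 8): initial pose plus 10 generated frames
```

Sampling z from the prior yields plausible but undirected motion; replacing `random_policy` with a goal-conditioned policy is what turns the generative model into a controller.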

Paper PDF (11MB)



This Motion VAE demo runs entirely in the browser (WebGL required), using ONNX.js and three.js. Please refer to the paper and video for other tasks and more details.

  • Left-drag to rotate, right-drag to pan, and scroll to zoom.
  • Place target: Ctrl-click
  • Set heading direction: Ctrl-click or arrow keys


@article{Ling2020MVAE,
  author    = {Hung Yu Ling and Fabio Zinno and George Cheng and Michiel van de Panne},
  title     = {Character Controllers Using Motion VAEs},
  journal   = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH)},
  publisher = {ACM},
  volume    = {39},
  number    = {4},
  year      = {2020}
}
Last Updated: 5/16/2020, 11:23:43 AM