Learning Soccer Juggling Skills with Layer-wise Mixture-of-Experts

In Proceedings of SIGGRAPH 2022

ZHAOMING XIE, University of British Columbia and Electronic Arts, Canada
SEBASTIAN STARKE, University of Edinburgh and Electronic Arts, United Kingdom
HUNG YU LING, University of British Columbia, Canada
MICHIEL VAN DE PANNE, University of British Columbia, Canada


Paper: PDF (23MB) / Code: GitHub

Learning physics-based character controllers that can successfully integrate diverse motor skills using a single policy remains a challenging problem. We present a system to learn control policies for multiple soccer juggling skills, based on deep reinforcement learning. We introduce a task-description framework for these skills which facilitates the specification of individual soccer juggling tasks and the transitions between them. Desired motions can be authored using interpolation of crude reference poses or based on motion capture data. We show that a layer-wise mixture-of-experts architecture offers significant benefits. During training, transitions are chosen with the help of an adaptive random walk, in support of efficient learning. We demonstrate foot, head, knee, and chest juggles, foot stalls, the challenging around-the-world trick, as well as robust transitions. Our work provides a significant step towards realizing physics-based characters capable of the precision-based motor skills of human athletes.
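The layer-wise mixture-of-experts idea can be illustrated with a small sketch: a gating network maps the observation to blend weights over E experts, and every layer of the policy computes its effective weight matrix as the blend of that layer's expert weights. This is a minimal NumPy illustration of the blending mechanism only; the dimensions, gating network, and initialization are assumptions for demonstration, not the paper's actual architecture or hyperparameters.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


class LayerwiseMoELayer:
    """One fully-connected layer whose weights are blended from several experts."""

    def __init__(self, in_dim, out_dim, num_experts, rng):
        # One weight matrix and bias vector per expert.
        self.W = rng.standard_normal((num_experts, out_dim, in_dim)) * 0.1
        self.b = np.zeros((num_experts, out_dim))

    def __call__(self, x, alpha, activate=True):
        # alpha: (num_experts,) gating coefficients summing to 1.
        W = np.tensordot(alpha, self.W, axes=1)  # blended (out_dim, in_dim)
        b = alpha @ self.b                       # blended (out_dim,)
        h = W @ x + b
        return relu(h) if activate else h


class LayerwiseMoEPolicy:
    """A gating net produces blend weights; every layer blends its experts with them."""

    def __init__(self, obs_dim, act_dim, hidden=64, num_experts=4, seed=0):
        rng = np.random.default_rng(seed)
        # Illustrative linear gating network followed by a softmax.
        self.G = rng.standard_normal((num_experts, obs_dim)) * 0.1
        self.layers = [
            LayerwiseMoELayer(obs_dim, hidden, num_experts, rng),
            LayerwiseMoELayer(hidden, hidden, num_experts, rng),
            LayerwiseMoELayer(hidden, act_dim, num_experts, rng),
        ]

    def __call__(self, obs):
        logits = self.G @ obs
        alpha = np.exp(logits - logits.max())
        alpha /= alpha.sum()
        h = obs
        for i, layer in enumerate(self.layers):
            h = layer(h, alpha, activate=(i < len(self.layers) - 1))
        return h
```

Because the blend happens per layer rather than on the final outputs, every expert contributes to the internal features of the network, which is one plausible reading of why the layer-wise variant helps integrate many skills into one policy.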


We introduce a method for learning physics-based soccer juggling skills via deep reinforcement learning. Innovations include a layer-wise mixture-of-experts neural network policy for efficient learning, a control graph for authoring the many skills and their transitions, and an adaptive random walk curriculum on the control graph.
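The adaptive random walk over the control graph can be sketched as follows: the curriculum tracks a running success estimate per transition and samples the next skill with probability weighted toward transitions that still fail often, while a small floor keeps mastered transitions in rotation. The graph contents, update rule, and constants below are illustrative assumptions, not the paper's exact curriculum.

```python
import random
from collections import defaultdict


class AdaptiveRandomWalk:
    """Curriculum that walks a control graph, favoring transitions the
    policy has not yet mastered (estimated by a running success rate)."""

    def __init__(self, graph, floor=0.1, lr=0.05, seed=0):
        self.graph = graph                 # skill -> list of successor skills
        self.success = defaultdict(float)  # (skill, next) -> success estimate in [0, 1]
        self.floor = floor                 # minimum sampling weight per edge
        self.lr = lr                       # step size for the running average
        self.rng = random.Random(seed)

    def next_skill(self, current):
        succs = self.graph[current]
        # Weight each outgoing edge by its estimated failure rate.
        weights = [max(1.0 - self.success[(current, s)], self.floor) for s in succs]
        return self.rng.choices(succs, weights=weights)[0]

    def update(self, current, chosen, succeeded):
        # Exponential moving average of the success indicator for this edge.
        edge = (current, chosen)
        self.success[edge] += self.lr * (float(succeeded) - self.success[edge])


# Hypothetical toy graph of juggling skills and allowed transitions.
graph = {
    "foot_juggle": ["foot_juggle", "head_juggle", "foot_stall"],
    "head_juggle": ["head_juggle", "foot_juggle"],
    "foot_stall": ["foot_juggle"],
}
walk = AdaptiveRandomWalk(graph)
```

In a training loop, each episode would call `next_skill` to pick the upcoming transition and `update` with the episode outcome, so sampling probabilities adapt as skills are learned.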



If you use this code for your research, please cite our paper. The final BibTeX entry will be posted after the paper becomes available on the ACM website.

@inproceedings{Xie2022SoccerJuggling,
      author = {Zhaoming Xie and Sebastian Starke and Hung Yu Ling and Michiel van de Panne},
      title = {Learning Soccer Juggling Skills with Layer-wise Mixture-of-Experts},
      booktitle = {ACM SIGGRAPH 2022 Conference Proceedings},
      year = {2022}
}