Learning Locomotion Skills Using DeepRL: Does the Choice of Action Space Matter?

Xue Bin Peng     Michiel van de Panne
University of British Columbia
Abstract

The use of deep reinforcement learning allows for high-dimensional state descriptors, but little is known about how the choice of action representation impacts learning difficulty and the resulting performance. We compare the impact of four different action parameterizations (torques, muscle activations, target joint angles, and target joint-angle velocities) in terms of learning time, policy robustness, motion quality, and policy query rates. Our results are evaluated on a gait-cycle imitation task for multiple planar articulated figures and multiple gaits. We demonstrate that the local feedback provided by higher-level action parameterizations can significantly impact the learning, robustness, and quality of the resulting policies.
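
To make the distinction concrete, below is a minimal Python sketch (not from the paper) of how each action parameterization can be mapped to joint torques. The gains, the simplified muscle model, and all function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

KP, KD = 300.0, 30.0  # illustrative PD gains; in practice these are tuned per joint

def torque_action(action, q, dq):
    # Torques: the policy outputs joint torques directly; no local feedback.
    return action

def velocity_action(action, q, dq):
    # Target joint-angle velocities: tracked by damping (derivative) feedback.
    return KD * (action - dq)

def position_action(action, q, dq):
    # Target joint angles: tracked by proportional-derivative (PD) feedback.
    return KP * (action - q) - KD * dq

def muscle_activation_action(action, moment_arm, max_muscle_force):
    # Muscle activations in [0, 1]: torque follows from a (highly simplified)
    # activation * force capacity * moment-arm model.
    a = np.clip(action, 0.0, 1.0)
    return moment_arm * (a * max_muscle_force)
```

The relevant design difference: with target angles or velocities, the PD or damping feedback runs at the simulation rate, supplying local feedback between policy queries, whereas direct torque actions provide none.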

Paper
Presentation at the NIPS 2016 Deep Reinforcement Learning Workshop (Dec 2016, Barcelona)
arXiv:1611.01055 · PDF (arXiv) · ICLR 2017 submission (open review)
Videos


Bibtex
@article{DBLP:journals/corr/PengP16,
  author    = {Xue Bin Peng and
               Michiel van de Panne},
  title     = {Learning Locomotion Skills Using DeepRL: Does the Choice of Action
               Space Matter?},
  journal   = {CoRR},
  volume    = {abs/1611.01055},
  year      = {2016},
  url       = {http://arxiv.org/abs/1611.01055},
  timestamp = {Thu, 01 Dec 2016 19:32:08 +0100},
  biburl    = {http://dblp.uni-trier.de/rec/bib/journals/corr/PengP16},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}
Acknowledgements
We thank Glen Berseth for his presentation of this work at the NIPS 2016 Deep Reinforcement Learning Workshop, and NSERC for funding this research via a Discovery Grant (RGPIN-2015-04843).