We introduce a method for learning low-dimensional linear feedback strategies for the control of physics-based animated characters. Once learned, these strategies allow simulated characters to respond to changes in the environment and in their goals. The approach is based on policy search in the space of reduced-order linear output feedback matrices. We show that these matrices can replace or further reduce manually-designed state and action abstractions. The approach is sufficiently general to allow for the development of unconventional feedback loops, such as feedback based on ground reaction forces, which we use to achieve robust in-place balancing and robust walking. Results are demonstrated for a mix of 2D and 3D systems, including tilting-platform balancing, walking, running, rolling, targeted kicks, and several types of ball-hitting tasks.
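The core idea, policy search over the entries of a small linear output-feedback matrix, can be sketched on a toy linear system. Everything below (the double-integrator dynamics, the quadratic cost, and the hill-climbing search) is an illustrative stand-in, not the paper's simulator or optimizer:

```python
import numpy as np

def rollout(K, steps=200):
    """Simulate a toy discrete-time system with output feedback u = K y
    and return the negative accumulated quadratic cost (higher is better).
    The linear dynamics here stand in for a physics simulation."""
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])        # toy double integrator (position, velocity)
    B = np.array([[0.0],
                  [0.1]])             # control enters as acceleration
    C = np.eye(2)                     # observed features (low-dimensional output y)
    x = np.array([1.0, 0.0])          # start displaced from the origin
    cost = 0.0
    for _ in range(steps):
        y = C @ x                     # output vector
        u = K @ y                     # linear output feedback
        x = A @ x + B @ u
        cost += float(x @ x)
    return -cost

def policy_search(iters=300, sigma=0.5, seed=1):
    """Stochastic hill climbing over the entries of K -- a stand-in for
    the policy-search procedure, kept minimal for illustration."""
    rng = np.random.default_rng(seed)
    K = np.zeros((1, 2))              # one actuator, two observed features
    best = rollout(K)
    for _ in range(iters):
        K_try = K + sigma * rng.standard_normal(K.shape)
        score = rollout(K_try)
        if score > best:              # keep only improving perturbations
            K, best = K_try, score
    return K, best
```

Because the feedback matrix has only a handful of entries, even this crude search finds a stabilizing gain; the paper's setting differs in that the rollouts come from full physics simulation and the outputs can be unconventional signals such as ground reaction forces.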