Using Perceptual Data for More Efficient Reinforcement Learning
By Bethany Leffler
In real-world domains, a reinforcement-learning agent must learn a great deal from experience, so it must be sample efficient: it has to balance the exploration needed to model the environment accurately against the need to exploit the information it has already gathered to complete its original task. In robot domains, exploration is especially costly in both time and energy, so it is important to make the best possible use of the robot's limited opportunities to explore without degrading its performance. This presentation will discuss a specialization of the standard Markov Decision Process (MDP) framework that allows experience to be transferred more easily between similar states, and will introduce several algorithms that use this framework, along with perceptual data, to explore more efficiently in robot-navigation problems.
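The core idea of sharing experience between perceptually similar states can be sketched in a few lines. The code below is a minimal illustration, not the algorithms from the talk: it assumes (hypothetically) that states with the same perceptual "type" produce the same action outcomes, so a transition observed in one state transfers to every other state of that type. All class and function names here are invented for illustration.

```python
import random
from collections import defaultdict

class TypedModelLearner:
    """Pools transition experience by perceptual state type (a sketch).

    Instead of learning a separate model per state, transitions are
    recorded under (type, action), so one observation informs
    predictions at every state sharing that perceptual type.
    """

    def __init__(self, state_type):
        self.state_type = state_type  # maps state -> perceptual type
        # (type, action) -> {observed offset: count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state, action, next_state):
        # Record the transition as an *offset* under the state's type,
        # so the experience is relocatable to similar states.
        offset = next_state - state
        self.counts[(self.state_type(state), action)][offset] += 1

    def predict(self, state, action):
        # Apply the most frequently observed offset for this type/action.
        c = self.counts[(self.state_type(state), action)]
        if not c:
            return None  # no experience for this type/action yet
        best_offset = max(c, key=c.get)
        return state + best_offset

# Toy 1-D corridor where every state shares one perceptual type:
# a single observation at state 0 lets the learner predict the
# outcome of the same action taken at state 5.
learner = TypedModelLearner(state_type=lambda s: "corridor")
learner.observe(0, "right", 1)
print(learner.predict(5, "right"))  # -> 6
```

The exploration saving comes from the pooling: each (type, action) pair needs to be tried only a few times somewhere in the environment, rather than in every state individually.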