Relative Entropy Policy Search

Authors

  • Jan Peters, Max Planck Institute for Biological Cybernetics
  • Katharina Mülling, Max Planck Institute for Biological Cybernetics
  • Yasemin Altun, Max Planck Institute for Biological Cybernetics

DOI:

https://doi.org/10.1609/aaai.v24i1.7727

Keywords:

reinforcement learning, policy search, motor primitive selection

Abstract

Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, policy search has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients, many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems.
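
A sketch of the constrained problem behind REPS may help make the abstract concrete. The notation below (the sampled state–action distribution q, the information-loss bound ε, and the dual variables η and V) is introduced here for illustration and follows the standard presentation of the method; see the full paper for the precise formulation. REPS maximizes expected reward while bounding the relative entropy between the new state–action distribution and the observed one,

\[
\max_{\pi,\,\mu^{\pi}} \; \sum_{s,a} \mu^{\pi}(s)\,\pi(a \mid s)\,\mathcal{R}(s,a)
\quad \text{s.t.} \quad
D_{\mathrm{KL}}\!\left(\mu^{\pi}\pi \,\|\, q\right) \le \epsilon,
\]

together with normalization and stationarity constraints on \( \mu^{\pi} \). The resulting exact update reweights observed state–action pairs exponentially in a Bellman-error term \( \delta_V \),

\[
\pi(a \mid s) \;\propto\; q(s,a)\,\exp\!\left(\frac{\delta_V(s,a)}{\eta}\right),
\qquad
\delta_V(s,a) = \mathcal{R}(s,a) + \sum_{s'} \mathcal{P}(s' \mid s,a)\,V(s') - V(s),
\]

with \( \eta > 0 \) and the function \( V \) obtained by solving the corresponding dual problem.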

Published

2010-07-05

How to Cite

Peters, J., Mülling, K., & Altun, Y. (2010). Relative Entropy Policy Search. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1), 1607-1612. https://doi.org/10.1609/aaai.v24i1.7727