Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution

Vihang Patil, Markus Hofmarcher, Marius-Constantin Dinu, Matthias Dorfer, Patrick M Blies, Johannes Brandstetter, José Arjona-Medina, Sepp Hochreiter
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:17531-17572, 2022.

Abstract

Reinforcement learning algorithms require many samples when solving complex hierarchical tasks with sparse and delayed rewards. For such complex tasks, the recently proposed RUDDER uses reward redistribution to leverage steps in the Q-function that are associated with accomplishing sub-tasks. However, often only a few episodes with high rewards are available as demonstrations, since current exploration strategies cannot discover them in reasonable time. In this work, we introduce Align-RUDDER, which utilizes a profile model for reward redistribution that is obtained from multiple sequence alignment of demonstrations. Consequently, Align-RUDDER employs reward redistribution effectively and thereby drastically improves learning from few demonstrations. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the Minecraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. Code is available at github.com/ml-jku/align-rudder.
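
The abstract compresses the method into one sentence; the following Python sketch is a toy illustration of the idea, not the paper's implementation. It assumes demonstrations have already been mapped to a shared alphabet of discrete event codes (the paper's clustering step) and, for simplicity, that all toy demonstrations have equal length, so the multiple sequence alignment is trivial and gap-free. The names profile, alignment_score, and redistribute, the event codes, and the log-odds scoring are our assumptions; the real method aligns variable-length demonstrations with bioinformatics-style multiple sequence alignment and scores sub-sequences against the resulting profile.

import numpy as np
from collections import Counter

# Hypothetical event alphabet; in the paper, events come from clustering
# states of demonstration trajectories (the codes below are made up).
ALPHABET = "WSPID"

# Event-encoded demonstrations. They have equal length here, so the multiple
# sequence alignment is trivial (no gaps); the real method aligns
# variable-length demos with an MSA algorithm from bioinformatics.
aligned_demos = ["WSPID", "WSPID", "WPSID"]

# Build a profile: per alignment column, the relative frequency of each event.
length = len(aligned_demos[0])
profile = []
for col in range(length):
    counts = Counter(demo[col] for demo in aligned_demos)
    total = sum(counts.values())
    profile.append({e: counts[e] / total for e in ALPHABET})

def alignment_score(events):
    """Crude stand-in for the profile-alignment score g(.) of a (partial)
    event sequence: summed log-odds of each event vs. a uniform background."""
    background = 1.0 / len(ALPHABET)
    return sum(
        np.log((profile[t][e] + 1e-3) / background)
        for t, e in enumerate(events[:length])
    )

def redistribute(episode_events, episode_return):
    """Redistribute a delayed episode return over time steps: each step gets
    reward proportional to the increase in alignment score it causes,
    R_t ~ g(tau_{0:t}) - g(tau_{0:t-1}), normalized so the redistributed
    rewards sum to the original return (return equivalence)."""
    scores = [alignment_score(episode_events[: t + 1])
              for t in range(len(episode_events))]
    deltas = np.clip(np.diff([0.0] + scores), 0.0, None)  # reward only progress
    if deltas.sum() > 0:
        deltas = deltas * (episode_return / deltas.sum())
    return deltas

# A successful episode matching the profile: the single delayed return is
# spread over the steps that complete profile positions (the sub-tasks).
print(redistribute("WSPID", episode_return=1.0))

Redistribution of this kind turns a single delayed reward at the end of an episode into earlier feedback at sub-task boundaries, which is what lets the agent profit from only a few demonstrations.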

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-patil22a,
  title     = {Align-{RUDDER}: Learning From Few Demonstrations by Reward Redistribution},
  author    = {Patil, Vihang and Hofmarcher, Markus and Dinu, Marius-Constantin and Dorfer, Matthias and Blies, Patrick M and Brandstetter, Johannes and Arjona-Medina, Jos{\'e} and Hochreiter, Sepp},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {17531--17572},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/patil22a/patil22a.pdf},
  url       = {https://proceedings.mlr.press/v162/patil22a.html},
  abstract  = {Reinforcement learning algorithms require many samples when solving complex hierarchical tasks with sparse and delayed rewards. For such complex tasks, the recently proposed RUDDER uses reward redistribution to leverage steps in the Q-function that are associated with accomplishing sub-tasks. However, often only few episodes with high rewards are available as demonstrations since current exploration strategies cannot discover them in reasonable time. In this work, we introduce Align-RUDDER, which utilizes a profile model for reward redistribution that is obtained from multiple sequence alignment of demonstrations. Consequently, Align-RUDDER employs reward redistribution effectively and, thereby, drastically improves learning on few demonstrations. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the Minecraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. Code is available at github.com/ml-jku/align-rudder.}
}
APA
Patil, V., Hofmarcher, M., Dinu, M.-C., Dorfer, M., Blies, P.M., Brandstetter, J., Arjona-Medina, J. & Hochreiter, S. (2022). Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:17531-17572. Available from https://proceedings.mlr.press/v162/patil22a.html.
