Adaptive teaching in heterogeneous agents: Balancing surprise in sparse reward scenarios

Emma Clark, Kanghyun Ryu, Negar Mehr
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1489-1501, 2024.

Abstract

Learning from Demonstration (LfD) can be an efficient way to train systems with analogous agents by enabling “Student” agents to learn from the demonstrations of the most experienced “Teacher” agent, instead of training their policy in parallel. However, when there are discrepancies in agent capabilities, such as divergent actuator power or joint angle constraints, naively replicating demonstrations that are out of bounds for the Student’s capability can limit efficient learning. We present a Teacher-Student learning framework specifically tailored to address the challenge of heterogeneity between the Teacher and Student agents. Our framework is based on the concept of “surprise”, inspired by its application in exploration incentivization in sparse-reward environments. Surprise is repurposed to enable the Teacher to detect and adapt to differences between itself and the Student. By focusing on maximizing its surprise in response to the environment while concurrently minimizing the Student’s surprise in response to the demonstrations, the Teacher agent can effectively tailor its demonstrations to the Student’s specific capabilities and constraints. We validate our method by demonstrating improvements in the Student’s learning in control tasks within sparse-reward environments.
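The abstract's objective — the Teacher maximizes its own surprise with respect to the environment while minimizing the Student's surprise at the demonstrations — can be illustrated schematically. The sketch below is not the paper's actual formulation; it assumes surprise is measured as negative log-likelihood under a predictive model, and the trade-off weight `lam` is a hypothetical hyperparameter:

```python
import math

def nll_surprise(prob):
    """Surprise as negative log-likelihood: the probability a
    (hypothetical) predictive model assigned to what actually
    happened. Lower probability -> higher surprise."""
    return -math.log(max(prob, 1e-12))

def teacher_score(env_prob, student_prob, lam=1.0):
    """Schematic Teacher objective: reward surprise w.r.t. the
    environment (novelty/exploration) while penalizing the Student's
    surprise at the demonstration (keeping it within the Student's
    capabilities). `lam` trades off the two terms."""
    return nll_surprise(env_prob) - lam * nll_surprise(student_prob)
```

Under this toy model, a demonstration that is novel to the environment model (low `env_prob`) yet predictable to the Student (high `student_prob`) scores highest, matching the intuition that the Teacher should explore while staying within what the Student can reproduce.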

Cite this Paper


BibTeX
@InProceedings{pmlr-v242-clark24a,
  title     = {Adaptive teaching in heterogeneous agents: {B}alancing surprise in sparse reward scenarios},
  author    = {Clark, Emma and Ryu, Kanghyun and Mehr, Negar},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {1489--1501},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/clark24a/clark24a.pdf},
  url       = {https://proceedings.mlr.press/v242/clark24a.html},
  abstract  = {Learning from Demonstration (LfD) can be an efficient way to train systems with analogous agents by enabling “Student” agents to learn from the demonstrations of the most experienced “Teacher” agent, instead of training their policy in parallel. However, when there are discrepancies in agent capabilities, such as divergent actuator power or joint angle constraints, naively replicating demonstrations that are out of bounds for the Student’s capability can limit efficient learning. We present a Teacher-Student learning framework specifically tailored to address the challenge of heterogeneity between the Teacher and Student agents. Our framework is based on the concept of “surprise”, inspired by its application in exploration incentivization in sparse-reward environments. Surprise is repurposed to enable the Teacher to detect and adapt to differences between itself and the Student. By focusing on maximizing its surprise in response to the environment while concurrently minimizing the Student’s surprise in response to the demonstrations, the Teacher agent can effectively tailor its demonstrations to the Student’s specific capabilities and constraints. We validate our method by demonstrating improvements in the Student’s learning in control tasks within sparse-reward environments.}
}
Endnote
%0 Conference Paper
%T Adaptive teaching in heterogeneous agents: Balancing surprise in sparse reward scenarios
%A Emma Clark
%A Kanghyun Ryu
%A Negar Mehr
%B Proceedings of the 6th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Alessandro Abate
%E Mark Cannon
%E Kostas Margellos
%E Antonis Papachristodoulou
%F pmlr-v242-clark24a
%I PMLR
%P 1489--1501
%U https://proceedings.mlr.press/v242/clark24a.html
%V 242
%X Learning from Demonstration (LfD) can be an efficient way to train systems with analogous agents by enabling “Student” agents to learn from the demonstrations of the most experienced “Teacher” agent, instead of training their policy in parallel. However, when there are discrepancies in agent capabilities, such as divergent actuator power or joint angle constraints, naively replicating demonstrations that are out of bounds for the Student’s capability can limit efficient learning. We present a Teacher-Student learning framework specifically tailored to address the challenge of heterogeneity between the Teacher and Student agents. Our framework is based on the concept of “surprise”, inspired by its application in exploration incentivization in sparse-reward environments. Surprise is repurposed to enable the Teacher to detect and adapt to differences between itself and the Student. By focusing on maximizing its surprise in response to the environment while concurrently minimizing the Student’s surprise in response to the demonstrations, the Teacher agent can effectively tailor its demonstrations to the Student’s specific capabilities and constraints. We validate our method by demonstrating improvements in the Student’s learning in control tasks within sparse-reward environments.
APA
Clark, E., Ryu, K. & Mehr, N. (2024). Adaptive teaching in heterogeneous agents: Balancing surprise in sparse reward scenarios. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:1489-1501. Available from https://proceedings.mlr.press/v242/clark24a.html.