Context-Adaptive Reinforcement Learning using Unsupervised Learning of Context Variables

Hamid Eghbal-zadeh, Florian Henkel, Gerhard Widmer
NeurIPS 2020 Workshop on Pre-registration in Machine Learning, PMLR 148:236-254, 2021.

Abstract

In Reinforcement Learning (RL), changes in the context often cause a distributional change in the environment's observations, requiring the agent to adapt. For example, when a new user interacts with a system, the system has to adapt to the needs of that user, which may differ depending on user characteristics that are often not observable. In this Contextual Reinforcement Learning (CRL) setting, the agent must not only recognise and adapt to the current context, but also remember previous ones. However, in CRL the context is often unknown, so a supervised approach to learning to predict the context is not feasible. In this paper, we introduce the Context-Adaptive Reinforcement Learning Agent (CARLA), which learns context variables in an unsupervised manner and adapts its policy to the current context. We provide a hypothesis, based on the generative process, that explains how the context variable relates to the states and observations of an environment. Further, we propose an experimental protocol to test and validate our hypothesis, and compare the performance of the proposed approach with other methods in a CRL environment. Finally, we provide empirical results in support of our hypothesis, demonstrating the effectiveness of CARLA in tackling CRL.
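The paper itself specifies CARLA's architecture and training procedure; as a rough illustration of the setting the abstract describes (a hidden context that shifts the observation distribution, a context variable inferred without labels, and a policy conditioned on that inferred variable), here is a minimal toy sketch. Everything in it is an assumption made for illustration and not the authors' method: the HiddenContextBandit environment, the online 2-means clustering standing in for the unsupervised context learner, and all hyperparameters.

# Toy sketch only (assumed, not from the paper): a hidden context shifts the
# observation distribution; the agent clusters observations online (2-means)
# to infer a discrete context variable without labels, and keeps a separate
# action-value table per inferred context.
import numpy as np

rng = np.random.default_rng(0)

class HiddenContextBandit:
    """Two-armed bandit; the rewarding arm and observation mean depend on a hidden context."""
    def __init__(self):
        self.context = 0                 # never shown to the agent
        self.obs_mean = [-2.0, 2.0]      # observation mean per context

    def observe(self):
        if rng.random() < 0.01:          # occasional unobserved context switch
            self.context = 1 - self.context
        return rng.normal(self.obs_mean[self.context], 1.0)

    def pull(self, action):
        return 1.0 if action == self.context else 0.0

env = HiddenContextBandit()
centroids = np.array([-1.0, 1.0])        # online 2-means: the unsupervised context learner
q = np.zeros((2, 2))                     # q[k] = action values under inferred context k
eps, lr = 0.1, 0.1

for t in range(20000):
    obs = env.observe()
    k = int(np.argmin(np.abs(centroids - obs)))    # inferred context variable
    centroids[k] += 0.05 * (obs - centroids[k])    # move matched centroid toward obs
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q[k]))
    q[k, a] += lr * (env.pull(a) - q[k, a])

print(q.round(2))   # each inferred context should end up preferring a different arm

After training, each inferred context's value table prefers a different arm, mirroring the abstract's point that recognising and remembering contexts lets the agent adapt its policy instead of averaging over all contexts.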

Cite this Paper

BibTeX
@InProceedings{pmlr-v148-eghbal-zadeh21a,
  title     = {Context-Adaptive Reinforcement Learning using Unsupervised Learning of Context Variables},
  author    = {Eghbal-zadeh, Hamid and Henkel, Florian and Widmer, Gerhard},
  booktitle = {NeurIPS 2020 Workshop on Pre-registration in Machine Learning},
  pages     = {236--254},
  year      = {2021},
  editor    = {Bertinetto, Luca and Henriques, João F. and Albanie, Samuel and Paganini, Michela and Varol, Gül},
  volume    = {148},
  series    = {Proceedings of Machine Learning Research},
  month     = {11 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v148/eghbal-zadeh21a/eghbal-zadeh21a.pdf},
  url       = {https://proceedings.mlr.press/v148/eghbal-zadeh21a.html}
}
Endnote
%0 Conference Paper
%T Context-Adaptive Reinforcement Learning using Unsupervised Learning of Context Variables
%A Hamid Eghbal-zadeh
%A Florian Henkel
%A Gerhard Widmer
%B NeurIPS 2020 Workshop on Pre-registration in Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Luca Bertinetto
%E João F. Henriques
%E Samuel Albanie
%E Michela Paganini
%E Gül Varol
%F pmlr-v148-eghbal-zadeh21a
%I PMLR
%P 236--254
%U https://proceedings.mlr.press/v148/eghbal-zadeh21a.html
%V 148
APA
Eghbal-zadeh, H., Henkel, F. & Widmer, G. (2021). Context-Adaptive Reinforcement Learning using Unsupervised Learning of Context Variables. NeurIPS 2020 Workshop on Pre-registration in Machine Learning, in Proceedings of Machine Learning Research 148:236-254. Available from https://proceedings.mlr.press/v148/eghbal-zadeh21a.html.
