Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits

Wenshuo Guo, Kumar Krishna Agrawal, Aditya Grover, Vidya K. Muthukumar, Ashwin Pananjady
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:6357-6386, 2022.

Abstract

We introduce the “inverse bandit” problem of estimating the rewards of a multi-armed bandit instance from observing the learning process of a low-regret demonstrator. Existing approaches to the related problem of inverse reinforcement learning assume the execution of an optimal policy, and thereby suffer from an identifiability issue. In contrast, we propose to leverage the demonstrator’s behavior en route to optimality, and in particular, the exploration phase, for reward estimation. We begin by establishing a general information-theoretic lower bound under this paradigm that applies to any demonstrator algorithm, which characterizes a fundamental tradeoff between reward estimation and the amount of exploration of the demonstrator. Then, we develop simple and efficient reward estimators for upper-confidence-based demonstrator algorithms that attain the optimal tradeoff, showing in particular that consistent reward estimation—free of identifiability issues—is possible under our paradigm. Extensive simulations on both synthetic and semi-synthetic data corroborate our theoretical results.
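To make the premise concrete, here is a rough, hypothetical illustration of the idea (not the estimator analyzed in the paper): when the demonstrator runs a UCB-style algorithm, the number of times it pulls each suboptimal arm grows, heuristically, like log T divided by the squared reward gap, so an observer who sees only the sequence of arm pulls can invert the pull counts to estimate the gaps. The Python sketch below assumes Gaussian rewards and the standard UCB1 exploration bonus; all instance values and constants are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bandit instance (unknown to the observer).
true_means = np.array([0.9, 0.6, 0.5])
K, T = len(true_means), 20_000

# --- Demonstrator: a standard UCB1 learner ---
counts = np.zeros(K, dtype=int)
sums = np.zeros(K)
actions = []  # the only data the observer sees

for t in range(1, T + 1):
    if t <= K:
        arm = t - 1  # pull each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = rng.normal(true_means[arm], 0.1)
    counts[arm] += 1
    sums[arm] += reward
    actions.append(arm)

# --- Observer: estimate reward gaps from pull counts alone ---
# Heuristically, UCB1 pulls a suboptimal arm i about 2*log(T)/gap_i^2 times,
# so inverting the observed pull counts gives a crude gap estimate.
obs_counts = np.bincount(actions, minlength=K)
best = int(np.argmax(obs_counts))  # the most-pulled arm is (likely) optimal
gap_hat = np.where(
    np.arange(K) == best,
    0.0,
    np.sqrt(2 * np.log(T) / obs_counts),
)
print("true gaps:     ", np.round(true_means.max() - true_means, 3))
print("estimated gaps:", np.round(gap_hat, 3))

This toy inversion recovers only reward gaps, not absolute means, and ignores the finite-sample corrections that the paper's estimators and lower bound account for; it is meant purely to illustrate why an exploring demonstrator leaks information about the rewards.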

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-guo22b,
  title     = {Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits},
  author    = {Guo, Wenshuo and Krishna Agrawal, Kumar and Grover, Aditya and Muthukumar, Vidya K. and Pananjady, Ashwin},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {6357--6386},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/guo22b/guo22b.pdf},
  url       = {https://proceedings.mlr.press/v151/guo22b.html}
}
Endnote
%0 Conference Paper
%T Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits
%A Wenshuo Guo
%A Kumar Krishna Agrawal
%A Aditya Grover
%A Vidya K. Muthukumar
%A Ashwin Pananjady
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-guo22b
%I PMLR
%P 6357--6386
%U https://proceedings.mlr.press/v151/guo22b.html
%V 151
APA
Guo, W., Krishna Agrawal, K., Grover, A., Muthukumar, V.K. & Pananjady, A. (2022). Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:6357-6386. Available from https://proceedings.mlr.press/v151/guo22b.html.
