Near-Optimal Regret in Linear MDPs with Aggregate Bandit Feedback

Asaf Cassel, Haipeng Luo, Aviv Rosenberg, Dmitry Sotnikov
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:5757-5791, 2024.

Abstract

In many real-world applications, it is hard to provide a reward signal in each step of a Reinforcement Learning (RL) process and more natural to give feedback when an episode ends. To this end, we study the recently proposed model of RL with Aggregate Bandit Feedback (RL-ABF), where the agent only observes the sum of rewards at the end of an episode instead of each reward individually. Prior work studied RL-ABF only in tabular settings, where the number of states is assumed to be small. In this paper, we extend ABF to linear function approximation and develop two efficient algorithms with near-optimal regret guarantees: a value-based optimistic algorithm built on a new randomization technique with a Q-functions ensemble, and a policy optimization algorithm that uses a novel hedging scheme over the ensemble.
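The feedback model described above can be illustrated with a minimal sketch of the RL-ABF interaction protocol: per-step rewards exist inside the environment but stay hidden, and the learner only observes their sum when the episode ends. The `ChainEnv` toy MDP and the constant policy below are hypothetical illustrations, not objects from the paper.

```python
class ChainEnv:
    """Toy deterministic chain MDP: the state advances by the action
    taken, and the per-step reward equals the action (0 or 1)."""

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += action
        reward = float(action)  # hidden from the learner at each step
        return self.state, reward


def run_episode_abf(env, policy, horizon):
    """Run one episode under Aggregate Bandit Feedback: return the
    trajectory and only the SUM of rewards, never individual rewards."""
    state = env.reset()
    trajectory, hidden_sum = [], 0.0
    for h in range(horizon):
        action = policy(state, h)
        state, reward = env.step(action)
        trajectory.append((state, action))
        hidden_sum += reward  # accumulated by the environment, unseen
    # The learner receives only the aggregate at episode end.
    return trajectory, hidden_sum


traj, aggregate = run_episode_abf(ChainEnv(), lambda s, h: 1, horizon=5)
print(aggregate)  # 5.0 — the sum of five hidden unit rewards
```

A learning algorithm in this setting must attribute the single scalar `aggregate` back to the steps of `traj`, which is what the paper's ensemble-based techniques address in the linear MDP case.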

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-cassel24a,
  title     = {Near-Optimal Regret in Linear {MDP}s with Aggregate Bandit Feedback},
  author    = {Cassel, Asaf and Luo, Haipeng and Rosenberg, Aviv and Sotnikov, Dmitry},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {5757--5791},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/cassel24a/cassel24a.pdf},
  url       = {https://proceedings.mlr.press/v235/cassel24a.html},
  abstract  = {In many real-world applications, it is hard to provide a reward signal in each step of a Reinforcement Learning (RL) process and more natural to give feedback when an episode ends. To this end, we study the recently proposed model of RL with Aggregate Bandit Feedback (RL-ABF), where the agent only observes the sum of rewards at the end of an episode instead of each reward individually. Prior work studied RL-ABF only in tabular settings, where the number of states is assumed to be small. In this paper, we extend ABF to linear function approximation and develop two efficient algorithms with near-optimal regret guarantees: a value-based optimistic algorithm built on a new randomization technique with a Q-functions ensemble, and a policy optimization algorithm that uses a novel hedging scheme over the ensemble.}
}
Endnote
%0 Conference Paper
%T Near-Optimal Regret in Linear MDPs with Aggregate Bandit Feedback
%A Asaf Cassel
%A Haipeng Luo
%A Aviv Rosenberg
%A Dmitry Sotnikov
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-cassel24a
%I PMLR
%P 5757--5791
%U https://proceedings.mlr.press/v235/cassel24a.html
%V 235
%X In many real-world applications, it is hard to provide a reward signal in each step of a Reinforcement Learning (RL) process and more natural to give feedback when an episode ends. To this end, we study the recently proposed model of RL with Aggregate Bandit Feedback (RL-ABF), where the agent only observes the sum of rewards at the end of an episode instead of each reward individually. Prior work studied RL-ABF only in tabular settings, where the number of states is assumed to be small. In this paper, we extend ABF to linear function approximation and develop two efficient algorithms with near-optimal regret guarantees: a value-based optimistic algorithm built on a new randomization technique with a Q-functions ensemble, and a policy optimization algorithm that uses a novel hedging scheme over the ensemble.
APA
Cassel, A., Luo, H., Rosenberg, A. & Sotnikov, D. (2024). Near-Optimal Regret in Linear MDPs with Aggregate Bandit Feedback. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:5757-5791. Available from https://proceedings.mlr.press/v235/cassel24a.html.