Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits

Jiabin Lin, Shana Moothedath, Namrata Vaswani
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:30227-30251, 2024.

Abstract

We study how representation learning can improve the learning efficiency of contextual bandit problems. We consider the setting where we play T linear contextual bandits with dimension d simultaneously, and these T bandit tasks collectively share a common linear representation with a dimensionality of r ≪ d. We present a new algorithm based on an alternating projected gradient descent (GD) and minimization estimator to recover a low-rank feature matrix. We obtain constructive provable guarantees for our estimator that provide a lower bound on the required sample complexity and an upper bound on the iteration complexity (the total number of iterations needed to achieve a certain error level). Using the proposed estimator, we present a multi-task learning algorithm for linear contextual bandits and prove the regret bound of our algorithm. We present experiments and compare the performance of our algorithm against benchmark algorithms.
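The abstract describes recovering a shared low-rank representation by alternating a minimization step (per-task weights) with a projected gradient step (the shared feature matrix). The sketch below is a minimal illustration of that general idea on synthetic data, not the authors' exact algorithm: all dimensions, the step size, and the QR-based projection onto orthonormal matrices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, T, n = 20, 3, 30, 100  # ambient dim, rank, tasks, samples per task (assumed)

# Ground truth: shared representation B_star (d x r, orthonormal columns)
# and per-task weights w_t, so each task parameter is theta_t = B_star @ w_t.
B_star, _ = np.linalg.qr(rng.standard_normal((d, r)))
W_star = rng.standard_normal((r, T))

X = rng.standard_normal((T, n, d))
Y = np.stack([X[t] @ B_star @ W_star[:, t] + 0.01 * rng.standard_normal(n)
              for t in range(T)])

# Alternating estimator: exact least squares for each w_t given B,
# then one gradient step on B, projected back to orthonormal columns via QR.
B, _ = np.linalg.qr(rng.standard_normal((d, r)))
eta = 0.5 / (T * n)
for _ in range(300):
    W = np.stack([np.linalg.lstsq(X[t] @ B, Y[t], rcond=None)[0]
                  for t in range(T)], axis=1)           # r x T
    G = np.zeros((d, r))                                # gradient of squared loss in B
    for t in range(T):
        resid = X[t] @ B @ W[:, t] - Y[t]
        G += X[t].T @ np.outer(resid, W[:, t])
    B, _ = np.linalg.qr(B - eta * G)                    # projection step

# Subspace distance between span(B) and span(B_star); small means recovery.
dist = np.linalg.norm((np.eye(d) - B @ B.T) @ B_star, 2)
print(dist)
```

With enough samples per task, the estimated subspace aligns with the true one up to the noise level; the per-task weights can then be re-fit in the r-dimensional projected feature space, which is the efficiency gain multi-task representation learning targets.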

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-lin24o,
  title = {Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits},
  author = {Lin, Jiabin and Moothedath, Shana and Vaswani, Namrata},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {30227--30251},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24o/lin24o.pdf},
  url = {https://proceedings.mlr.press/v235/lin24o.html},
  abstract = {We study how representation learning can improve the learning efficiency of contextual bandit problems. We consider the setting where we play T linear contextual bandits with dimension d simultaneously, and these T bandit tasks collectively share a common linear representation with a dimensionality of r ≪ d. We present a new algorithm based on an alternating projected gradient descent (GD) and minimization estimator to recover a low-rank feature matrix. We obtain constructive provable guarantees for our estimator that provide a lower bound on the required sample complexity and an upper bound on the iteration complexity (the total number of iterations needed to achieve a certain error level). Using the proposed estimator, we present a multi-task learning algorithm for linear contextual bandits and prove the regret bound of our algorithm. We present experiments and compare the performance of our algorithm against benchmark algorithms.}
}
Endnote
%0 Conference Paper
%T Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits
%A Jiabin Lin
%A Shana Moothedath
%A Namrata Vaswani
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-lin24o
%I PMLR
%P 30227--30251
%U https://proceedings.mlr.press/v235/lin24o.html
%V 235
%X We study how representation learning can improve the learning efficiency of contextual bandit problems. We consider the setting where we play T linear contextual bandits with dimension d simultaneously, and these T bandit tasks collectively share a common linear representation with a dimensionality of r ≪ d. We present a new algorithm based on an alternating projected gradient descent (GD) and minimization estimator to recover a low-rank feature matrix. We obtain constructive provable guarantees for our estimator that provide a lower bound on the required sample complexity and an upper bound on the iteration complexity (the total number of iterations needed to achieve a certain error level). Using the proposed estimator, we present a multi-task learning algorithm for linear contextual bandits and prove the regret bound of our algorithm. We present experiments and compare the performance of our algorithm against benchmark algorithms.
APA
Lin, J., Moothedath, S. & Vaswani, N. (2024). Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:30227-30251. Available from https://proceedings.mlr.press/v235/lin24o.html.