Offline Contextual Bandits with Overparameterized Models

David Brandfonbrener, William Whitney, Rajesh Ranganath, Joan Bruna
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1049-1058, 2021.

Abstract

Recent results in supervised learning suggest that while overparameterized models have the capacity to overfit, they in fact generalize quite well. We ask whether the same phenomenon occurs for offline contextual bandits. Our results are mixed. Value-based algorithms benefit from the same generalization behavior as overparameterized supervised learning, but policy-based algorithms do not. We show that this discrepancy is due to the action-stability of their objectives. An objective is action-stable if there exists a prediction (action-value vector or action distribution) which is optimal no matter which action is observed. While value-based objectives are action-stable, policy-based objectives are unstable. We formally prove upper bounds on the regret of overparameterized value-based learning and lower bounds on the regret for policy-based algorithms. In our experiments with large neural networks, this gap between action-stable value-based objectives and unstable policy-based objectives leads to significant performance differences.
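
A minimal sketch (not the authors' code) of the two objective families the abstract contrasts, for an offline contextual bandit with K discrete actions and logged data (x_i, a_i, r_i) from a known behavior policy. The names (value_loss, ips_objective, behavior_prob) and the linear parameterization are illustrative assumptions; the value-based objective regresses predicted action values onto observed rewards, while the policy-based objective is an importance-weighted (IPS) value estimate.

import numpy as np

rng = np.random.default_rng(0)
n, d, K = 1000, 5, 4

X = rng.normal(size=(n, d))                      # logged contexts
W_true = rng.normal(size=(d, K))                 # unknown mean-reward model (for simulation only)
behavior_prob = np.full(K, 1.0 / K)              # uniform logging policy
A = rng.integers(0, K, size=n)                   # logged actions
R = (X @ W_true)[np.arange(n), A] + 0.1 * rng.normal(size=n)  # observed rewards

def value_loss(W):
    """Value-based objective: regress q(x, a) onto the observed reward.
    The per-example loss (q(x_i, a_i) - r_i)^2 is action-stable: the true
    mean-reward vector minimizes it no matter which action was logged."""
    q = X @ W                                    # (n, K) predicted action values
    return np.mean((q[np.arange(n), A] - R) ** 2)

def ips_objective(theta):
    """Policy-based objective: importance-weighted (IPS) estimate of the
    learned policy's value. Its per-example optimum depends on which action
    happened to be logged, so it is not action-stable."""
    logits = X @ theta
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    weights = pi[np.arange(n), A] / behavior_prob[A]
    return np.mean(weights * R)

# Example: evaluate both objectives at a random parameter setting.
W0 = rng.normal(size=(d, K))
print(value_loss(W0), ips_objective(W0))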

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-brandfonbrener21a,
  title     = {Offline Contextual Bandits with Overparameterized Models},
  author    = {Brandfonbrener, David and Whitney, William and Ranganath, Rajesh and Bruna, Joan},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1049--1058},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/brandfonbrener21a/brandfonbrener21a.pdf},
  url       = {https://proceedings.mlr.press/v139/brandfonbrener21a.html},
  abstract  = {Recent results in supervised learning suggest that while overparameterized models have the capacity to overfit, they in fact generalize quite well. We ask whether the same phenomenon occurs for offline contextual bandits. Our results are mixed. Value-based algorithms benefit from the same generalization behavior as overparameterized supervised learning, but policy-based algorithms do not. We show that this discrepancy is due to the \emph{action-stability} of their objectives. An objective is action-stable if there exists a prediction (action-value vector or action distribution) which is optimal no matter which action is observed. While value-based objectives are action-stable, policy-based objectives are unstable. We formally prove upper bounds on the regret of overparameterized value-based learning and lower bounds on the regret for policy-based algorithms. In our experiments with large neural networks, this gap between action-stable value-based objectives and unstable policy-based objectives leads to significant performance differences.}
}
Endnote
%0 Conference Paper
%T Offline Contextual Bandits with Overparameterized Models
%A David Brandfonbrener
%A William Whitney
%A Rajesh Ranganath
%A Joan Bruna
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-brandfonbrener21a
%I PMLR
%P 1049--1058
%U https://proceedings.mlr.press/v139/brandfonbrener21a.html
%V 139
%X Recent results in supervised learning suggest that while overparameterized models have the capacity to overfit, they in fact generalize quite well. We ask whether the same phenomenon occurs for offline contextual bandits. Our results are mixed. Value-based algorithms benefit from the same generalization behavior as overparameterized supervised learning, but policy-based algorithms do not. We show that this discrepancy is due to the action-stability of their objectives. An objective is action-stable if there exists a prediction (action-value vector or action distribution) which is optimal no matter which action is observed. While value-based objectives are action-stable, policy-based objectives are unstable. We formally prove upper bounds on the regret of overparameterized value-based learning and lower bounds on the regret for policy-based algorithms. In our experiments with large neural networks, this gap between action-stable value-based objectives and unstable policy-based objectives leads to significant performance differences.
APA
Brandfonbrener, D., Whitney, W., Ranganath, R. & Bruna, J. (2021). Offline Contextual Bandits with Overparameterized Models. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1049-1058. Available from https://proceedings.mlr.press/v139/brandfonbrener21a.html.
