Linear Bandits with Partially Observable Features

Wonyoung Kim, Sungwoo Park, Garud Iyengar, Assaf Zeevi, Min-Hwan Oh
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:30688-30719, 2025.

Abstract

We study the linear bandit problem that accounts for partially observable features. Without proper handling, unobserved features can lead to linear regret in the decision horizon $T$, as their influence on rewards is unknown. To tackle this challenge, we propose a novel theoretical framework and an algorithm with sublinear regret guarantees. The core of our algorithm consists of (i) feature augmentation, by appending basis vectors that are orthogonal to the row space of the observed features; and (ii) the introduction of a doubly robust estimator. Our approach achieves a regret bound of $\tilde{O}(\sqrt{(d + d_h)T})$, where $d$ is the dimension of the observed features and $d_h$ depends on the extent to which the unobserved feature space is contained in the observed one, thereby capturing the intrinsic difficulty of the problem. Notably, our algorithm requires no prior knowledge of the unobserved feature space, which may expand as more features become hidden. Numerical experiments confirm that our algorithm outperforms both non-contextual multi-armed bandits and linear bandit algorithms depending solely on observed features.
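
As an illustration of step (i), below is a minimal sketch (not the authors' implementation) of the feature-augmentation idea described in the abstract, assuming the observed features of the $K$ arms in a round are stacked as the columns of a $d \times K$ matrix. The function name augment_features, the column-stacking convention, and the numerical tolerance are illustrative assumptions; the doubly robust estimation step (ii) is not shown.

import numpy as np

def augment_features(X, tol=1e-10):
    """Sketch of feature augmentation for partially observable features.

    X : (d, K) array whose columns are the observed features of the K arms.
    Basis vectors of the orthogonal complement of the row space of X
    (a subspace of R^K) are appended as extra rows, so any per-arm bias
    vector in R^K lies in the row space of the augmented matrix.
    """
    # Rows of Vt associated with (near-)zero singular values span the
    # orthogonal complement of the row space of X inside R^K.
    _, s, Vt = np.linalg.svd(X, full_matrices=True)
    rank = int(np.sum(s > tol * s.max())) if s.size else 0
    V = Vt[rank:]                # shape (K - rank, K)
    return np.vstack([X, V])     # column a is arm a's augmented feature

if __name__ == "__main__":
    # Illustrative numbers: d = 2 observed dimensions, K = 3 arms.
    X = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
    X_aug = augment_features(X)
    print(X_aug.shape)           # (3, 3): one complement direction appended

Under this reading, the augmented dimension grows by at most $K - \mathrm{rank}(X)$, which is one way the quantity $d_h$ in the regret bound can stay small when the unobserved feature space is largely contained in the observed one.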

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-kim25af,
  title     = {Linear Bandits with Partially Observable Features},
  author    = {Kim, Wonyoung and Park, Sungwoo and Iyengar, Garud and Zeevi, Assaf and Oh, Min-Hwan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {30688--30719},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kim25af/kim25af.pdf},
  url       = {https://proceedings.mlr.press/v267/kim25af.html},
  abstract  = {We study the linear bandit problem that accounts for partially observable features. Without proper handling, unobserved features can lead to linear regret in the decision horizon $T$, as their influence on rewards is unknown. To tackle this challenge, we propose a novel theoretical framework and an algorithm with sublinear regret guarantees. The core of our algorithm consists of (i) feature augmentation, by appending basis vectors that are orthogonal to the row space of the observed features; and (ii) the introduction of a doubly robust estimator. Our approach achieves a regret bound of $\tilde{O}(\sqrt{(d + d_h)T})$, where $d$ is the dimension of the observed features and $d_h$ depends on the extent to which the unobserved feature space is contained in the observed one, thereby capturing the intrinsic difficulty of the problem. Notably, our algorithm requires no prior knowledge of the unobserved feature space, which may expand as more features become hidden. Numerical experiments confirm that our algorithm outperforms both non-contextual multi-armed bandits and linear bandit algorithms depending solely on observed features.}
}
Endnote
%0 Conference Paper
%T Linear Bandits with Partially Observable Features
%A Wonyoung Kim
%A Sungwoo Park
%A Garud Iyengar
%A Assaf Zeevi
%A Min-Hwan Oh
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kim25af
%I PMLR
%P 30688--30719
%U https://proceedings.mlr.press/v267/kim25af.html
%V 267
%X We study the linear bandit problem that accounts for partially observable features. Without proper handling, unobserved features can lead to linear regret in the decision horizon $T$, as their influence on rewards is unknown. To tackle this challenge, we propose a novel theoretical framework and an algorithm with sublinear regret guarantees. The core of our algorithm consists of (i) feature augmentation, by appending basis vectors that are orthogonal to the row space of the observed features; and (ii) the introduction of a doubly robust estimator. Our approach achieves a regret bound of $\tilde{O}(\sqrt{(d + d_h)T})$, where $d$ is the dimension of the observed features and $d_h$ depends on the extent to which the unobserved feature space is contained in the observed one, thereby capturing the intrinsic difficulty of the problem. Notably, our algorithm requires no prior knowledge of the unobserved feature space, which may expand as more features become hidden. Numerical experiments confirm that our algorithm outperforms both non-contextual multi-armed bandits and linear bandit algorithms depending solely on observed features.
APA
Kim, W., Park, S., Iyengar, G., Zeevi, A., & Oh, M. (2025). Linear Bandits with Partially Observable Features. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:30688-30719. Available from https://proceedings.mlr.press/v267/kim25af.html.