Feasible Action Search for Bandit Linear Programs via Thompson Sampling

Aditya Gangrade, Aldo Pacchiano, Clayton Scott, Venkatesh Saligrama
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:18228-18256, 2025.

Abstract

We study the 'feasible action search' (FAS) problem for linear bandits, wherein a learner attempts to discover a feasible point for a set of linear constraints $\Phi_* a \ge 0,$ without knowledge of the matrix $\Phi_* \in \mathbb{R}^{m \times d}$. A FAS learner selects a sequence of actions $a_t,$ and uses observations of the form $\Phi_* a_t + \mathrm{noise}$ to either find a point with nearly optimal 'safety margin', or detect that the constraints are infeasible, where the safety margin of an action measures its (signed) distance from the constraint boundary. While of interest in its own right, the FAS problem also directly addresses a key deficiency in the extant theory of 'safe linear bandits' (SLBs), by discovering a safe initialisation for low-regret SLB methods. We propose and analyse a novel efficient FAS-learner. Our method, FAST, is based on Thompson Sampling. It applies a coupled random perturbation to an estimate of $\Phi_*,$ and plays a maximin point of a game induced by this perturbed matrix. We prove that FAST stops in $\tilde{O}(d^3/\varepsilon^2 M_*^2)$ steps, and incurs $\tilde{O}(d^3/|M_*|)$ safety costs, to either correctly detect infeasibility, or output a point that is at least $(1-\varepsilon) M_*$-safe, where $M_*$ is the optimal safety margin of $\Phi_*$. Further, instantiating prior SLB methods with the output of FAS yields the first SLB methods that incur $\tilde{O}(\sqrt{d^3 T/M_*^2})$ regret and $O(1)$ risk without a priori knowledge of a safe action. The main technical novelty lies in the extension of Thompson Sampling to this multiobjective setting, for which we both propose a coupled noise design, and provide an analysis that avoids convexity considerations.
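To make the mechanism described above concrete, here is a minimal, hypothetical sketch of a FAST-style loop on a toy two-constraint instance. It is not the paper's exact algorithm: the instance (`Phi_star`), the fixed horizon, the subgradient-based `maximin_point` oracle, and the specific perturbation shape are all illustrative simplifications. What it does reproduce is the abstract's core recipe: perturb a least-squares estimate of $\Phi_*$ with a single shared (coupled) Gaussian draw, play a maximin point of the perturbed game, and update from the noisy feedback $\Phi_* a_t + \mathrm{noise}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unknown constraint matrix: a is feasible iff Phi_star @ a >= 0
# (coordinatewise). This instance is an assumption for illustration.
Phi_star = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
m, d = Phi_star.shape
noise_sd = 0.1

def maximin_point(Phi, iters=300, lr=0.05):
    """Approximate argmax_{||a|| <= 1} min_i (Phi @ a)_i by projected
    subgradient ascent -- a stand-in for an exact maximin oracle."""
    a = np.ones(d) / np.sqrt(d)
    for _ in range(iters):
        i = int(np.argmin(Phi @ a))          # currently tightest constraint
        a = a + lr * Phi[i]                  # subgradient step on the min
        a = a / max(1.0, np.linalg.norm(a))  # project back onto the unit ball
    return a

V = np.eye(d)           # ridge-regularised design matrix
S = np.zeros((m, d))    # running sum of outer(y_t, a_t)

for t in range(500):
    V_inv = np.linalg.inv(V)
    Phi_hat = S @ V_inv                      # least-squares estimate of Phi_star
    # Coupled perturbation: ONE draw xi ~ N(0, V^{-1}) is added to every row,
    # so all constraints are shifted by the same random direction.
    xi = np.linalg.cholesky(V_inv) @ rng.standard_normal(d)
    Phi_tilde = Phi_hat + np.outer(np.ones(m), xi)
    a = maximin_point(Phi_tilde)             # play the perturbed maximin point
    y = Phi_star @ a + noise_sd * rng.standard_normal(m)
    V += np.outer(a, a)                      # update the LS statistics
    S += np.outer(y, a)

# After the fixed horizon, report the unperturbed maximin point. The paper's
# method instead stops adaptively and certifies the margin (or infeasibility).
a_out = maximin_point(S @ np.linalg.inv(V))
print("output action:", a_out, "estimated-safe true margin:", np.min(Phi_star @ a_out))
```

The "coupled" design is the key point of contrast with naive per-entry Thompson Sampling: because every row of the perturbed matrix shares the same draw `xi`, the perturbation acts on the whole game at once rather than on each constraint independently.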

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-gangrade25a,
  title = {Feasible Action Search for Bandit Linear Programs via Thompson Sampling},
  author = {Gangrade, Aditya and Pacchiano, Aldo and Scott, Clayton and Saligrama, Venkatesh},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {18228--18256},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/gangrade25a/gangrade25a.pdf},
  url = {https://proceedings.mlr.press/v267/gangrade25a.html},
  abstract = {We study the 'feasible action search' (FAS) problem for linear bandits, wherein a learner attempts to discover a feasible point for a set of linear constraints $\Phi_* a \ge 0,$ without knowledge of the matrix $\Phi_* \in \mathbb{R}^{m \times d}$. A FAS learner selects a sequence of actions $a_t,$ and uses observations of the form $\Phi_* a_t + \mathrm{noise}$ to either find a point with nearly optimal 'safety margin', or detect that the constraints are infeasible, where the safety margin of an action measures its (signed) distance from the constraint boundary. While of interest in its own right, the FAS problem also directly addresses a key deficiency in the extant theory of 'safe linear bandits' (SLBs), by discovering a safe initialisation for low-regret SLB methods. We propose and analyse a novel efficient FAS-learner. Our method, FAST, is based on Thompson Sampling. It applies a coupled random perturbation to an estimate of $\Phi_*,$ and plays a maximin point of a game induced by this perturbed matrix. We prove that FAST stops in $\tilde{O}(d^3/\varepsilon^2 M_*^2)$ steps, and incurs $\tilde{O}(d^3/|M_*|)$ safety costs, to either correctly detect infeasibility, or output a point that is at least $(1-\varepsilon) M_*$-safe, where $M_*$ is the optimal safety margin of $\Phi_*$. Further, instantiating prior SLB methods with the output of FAS yields the first SLB methods that incur $\tilde{O}(\sqrt{d^3 T/M_*^2})$ regret and $O(1)$ risk without a priori knowledge of a safe action. The main technical novelty lies in the extension of Thompson Sampling to this multiobjective setting, for which we both propose a coupled noise design, and provide an analysis that avoids convexity considerations.}
}
Endnote
%0 Conference Paper
%T Feasible Action Search for Bandit Linear Programs via Thompson Sampling
%A Aditya Gangrade
%A Aldo Pacchiano
%A Clayton Scott
%A Venkatesh Saligrama
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-gangrade25a
%I PMLR
%P 18228--18256
%U https://proceedings.mlr.press/v267/gangrade25a.html
%V 267
%X We study the 'feasible action search' (FAS) problem for linear bandits, wherein a learner attempts to discover a feasible point for a set of linear constraints $\Phi_* a \ge 0,$ without knowledge of the matrix $\Phi_* \in \mathbb{R}^{m \times d}$. A FAS learner selects a sequence of actions $a_t,$ and uses observations of the form $\Phi_* a_t + \mathrm{noise}$ to either find a point with nearly optimal 'safety margin', or detect that the constraints are infeasible, where the safety margin of an action measures its (signed) distance from the constraint boundary. While of interest in its own right, the FAS problem also directly addresses a key deficiency in the extant theory of 'safe linear bandits' (SLBs), by discovering a safe initialisation for low-regret SLB methods. We propose and analyse a novel efficient FAS-learner. Our method, FAST, is based on Thompson Sampling. It applies a coupled random perturbation to an estimate of $\Phi_*,$ and plays a maximin point of a game induced by this perturbed matrix. We prove that FAST stops in $\tilde{O}(d^3/\varepsilon^2 M_*^2)$ steps, and incurs $\tilde{O}(d^3/|M_*|)$ safety costs, to either correctly detect infeasibility, or output a point that is at least $(1-\varepsilon) M_*$-safe, where $M_*$ is the optimal safety margin of $\Phi_*$. Further, instantiating prior SLB methods with the output of FAS yields the first SLB methods that incur $\tilde{O}(\sqrt{d^3 T/M_*^2})$ regret and $O(1)$ risk without a priori knowledge of a safe action. The main technical novelty lies in the extension of Thompson Sampling to this multiobjective setting, for which we both propose a coupled noise design, and provide an analysis that avoids convexity considerations.
APA
Gangrade, A., Pacchiano, A., Scott, C. & Saligrama, V. (2025). Feasible Action Search for Bandit Linear Programs via Thompson Sampling. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:18228-18256. Available from https://proceedings.mlr.press/v267/gangrade25a.html.
