Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback

Tianyi Lin, Aldo Pacchiano, Yaodong Yu, Michael Jordan
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:13441-13467, 2022.

Abstract

Motivated by applications to online learning in sparse estimation and Bayesian optimization, we consider the problem of online unconstrained nonsubmodular minimization with delayed costs in both full information and bandit feedback settings. In contrast to previous works on online unconstrained submodular minimization, we focus on a class of nonsubmodular functions with special structure, and prove regret guarantees for several variants of the online and approximate online bandit gradient descent algorithms in static and delayed scenarios. We derive bounds for the agent's regret in the full information and bandit feedback settings, even if the delay between choosing a decision and receiving the incurred cost is unbounded. Key to our approach is the notion of $(\alpha, \beta)$-regret and the extension of the generic convex relaxation model from \citet{El-2020-Optimal}, the analysis of which is of independent interest. We conduct several simulation studies to demonstrate the efficacy of our algorithms.

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-lin22g,
  title     = {Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback},
  author    = {Lin, Tianyi and Pacchiano, Aldo and Yu, Yaodong and Jordan, Michael},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {13441--13467},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/lin22g/lin22g.pdf},
  url       = {https://proceedings.mlr.press/v162/lin22g.html},
  abstract  = {Motivated by applications to online learning in sparse estimation and Bayesian optimization, we consider the problem of online unconstrained nonsubmodular minimization with delayed costs in both full information and bandit feedback settings. In contrast to previous works on online unconstrained submodular minimization, we focus on a class of nonsubmodular functions with special structure, and prove regret guarantees for several variants of the online and approximate online bandit gradient descent algorithms in static and delayed scenarios. We derive bounds for the agent's regret in the full information and bandit feedback setting, even if the delay between choosing a decision and receiving the incurred cost is unbounded. Key to our approach is the notion of $(\alpha, \beta)$-regret and the extension of the generic convex relaxation model from \citet{El-2020-Optimal}, the analysis of which is of independent interest. We conduct and showcase several simulation studies to demonstrate the efficacy of our algorithms.}
}
Endnote
%0 Conference Paper
%T Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback
%A Tianyi Lin
%A Aldo Pacchiano
%A Yaodong Yu
%A Michael Jordan
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-lin22g
%I PMLR
%P 13441--13467
%U https://proceedings.mlr.press/v162/lin22g.html
%V 162
%X Motivated by applications to online learning in sparse estimation and Bayesian optimization, we consider the problem of online unconstrained nonsubmodular minimization with delayed costs in both full information and bandit feedback settings. In contrast to previous works on online unconstrained submodular minimization, we focus on a class of nonsubmodular functions with special structure, and prove regret guarantees for several variants of the online and approximate online bandit gradient descent algorithms in static and delayed scenarios. We derive bounds for the agent's regret in the full information and bandit feedback setting, even if the delay between choosing a decision and receiving the incurred cost is unbounded. Key to our approach is the notion of $(\alpha, \beta)$-regret and the extension of the generic convex relaxation model from \citet{El-2020-Optimal}, the analysis of which is of independent interest. We conduct and showcase several simulation studies to demonstrate the efficacy of our algorithms.
APA
Lin, T., Pacchiano, A., Yu, Y. & Jordan, M. (2022). Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:13441-13467. Available from https://proceedings.mlr.press/v162/lin22g.html.