Efficient Interactive Maximization of BP and Weakly Submodular Objectives

Adhyyan Narang, Omid Sadeghi, Lillian Ratliff, Maryam Fazel, Jeff Bilmes
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:2670-2699, 2024.

Abstract

In the context of online interactive machine learning with combinatorial objectives, we extend purely submodular prior work to more general non-submodular objectives. This includes: (1) those that are additively decomposable into a sum of two terms (a monotone submodular and monotone supermodular term, known as a BP decomposition); and (2) those that are only weakly submodular. In both cases, this allows representing not only competitive (submodular) but also complementary (supermodular) relationships between objects, enhancing this setting to a broader range of applications (e.g., movie recommendations, medical treatments, etc.) where this is beneficial. In the two-term case, moreover, we study not only the more typical monolithic feedback approach but also a novel framework where feedback is available separately for each term. With real-world practicality and scalability in mind, we integrate Nyström sketching techniques to significantly improve the computational complexity, including for the purely submodular case. In the Gaussian process contextual bandits setting, we show sub-linear theoretical regret bounds in all cases. We also empirically show good applicability to recommendation systems and data subset selection.
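To make the BP decomposition concrete, here is a toy Python sketch (all names and data are illustrative, not from the paper): the objective f = g + h combines a monotone submodular coverage term g with a monotone supermodular pairwise-bonus term h, and a plain greedy loop selects items by marginal gain. Note that greedy's classic 1 − 1/e guarantee for submodular maximization does not carry over to general BP objectives; this sketch only illustrates the structure, not the paper's interactive algorithms.

```python
import itertools

# Hypothetical toy instance: a ground set of items, e.g. movies to recommend.
ITEMS = ["a", "b", "c", "d", "e"]

# Submodular term g: coverage of "genres" (diminishing returns as genres repeat).
GENRES = {"a": {1, 2}, "b": {2, 3}, "c": {3}, "d": {1, 4}, "e": {4, 5}}

def g(S):
    """Monotone submodular: number of distinct genres covered by S."""
    return len(set().union(*(GENRES[i] for i in S)) if S else set())

# Supermodular term h: nonnegative pairwise complementarity bonuses
# (e.g. a movie and its sequel are worth more together than apart).
BONUS = {frozenset({"a", "b"}): 2.0, frozenset({"d", "e"}): 3.0}

def h(S):
    """Monotone supermodular: sum of bonuses over all pairs inside S."""
    return sum(BONUS.get(frozenset(p), 0.0) for p in itertools.combinations(S, 2))

def f(S):
    """The BP objective: submodular g plus supermodular h."""
    return g(S) + h(S)

def greedy(k):
    """Select k items, each maximizing the marginal gain of f."""
    S = []
    for _ in range(k):
        best = max((i for i in ITEMS if i not in S),
                   key=lambda i: f(S + [i]) - f(S))
        S.append(best)
    return S
```

For instance, `greedy(3)` first picks a high-coverage item, then is pulled toward its complement by the supermodular bonus, illustrating how h encodes complementarity that a purely submodular objective cannot.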

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-narang24a,
  title     = {Efficient Interactive Maximization of BP and Weakly Submodular Objectives},
  author    = {Narang, Adhyyan and Sadeghi, Omid and Ratliff, Lillian and Fazel, Maryam and Bilmes, Jeff},
  booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence},
  pages     = {2670--2699},
  year      = {2024},
  editor    = {Kiyavash, Negar and Mooij, Joris M.},
  volume    = {244},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/narang24a/narang24a.pdf},
  url       = {https://proceedings.mlr.press/v244/narang24a.html},
  abstract  = {In the context of online interactive machine learning with combinatorial objectives, we extend purely submodular prior work to more general non-submodular objectives. This includes: (1) those that are additively decomposable into a sum of two terms (a monotone submodular and monotone supermodular term, known as a BP decomposition); and (2) those that are only weakly submodular. In both cases, this allows representing not only competitive (submodular) but also complementary (supermodular) relationships between objects, enhancing this setting to a broader range of applications (e.g., movie recommendations, medical treatments, etc.) where this is beneficial. In the two-term case, moreover, we study not only the more typical monolithic feedback approach but also a novel framework where feedback is available separately for each term. With real-world practicality and scalability in mind, we integrate Nystr{\"o}m sketching techniques to significantly improve the computational complexity, including for the purely submodular case. In the Gaussian process contextual bandits setting, we show sub-linear theoretical regret bounds in all cases. We also empirically show good applicability to recommendation systems and data subset selection.}
}
Endnote
%0 Conference Paper
%T Efficient Interactive Maximization of BP and Weakly Submodular Objectives
%A Adhyyan Narang
%A Omid Sadeghi
%A Lillian Ratliff
%A Maryam Fazel
%A Jeff Bilmes
%B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Negar Kiyavash
%E Joris M. Mooij
%F pmlr-v244-narang24a
%I PMLR
%P 2670--2699
%U https://proceedings.mlr.press/v244/narang24a.html
%V 244
%X In the context of online interactive machine learning with combinatorial objectives, we extend purely submodular prior work to more general non-submodular objectives. This includes: (1) those that are additively decomposable into a sum of two terms (a monotone submodular and monotone supermodular term, known as a BP decomposition); and (2) those that are only weakly submodular. In both cases, this allows representing not only competitive (submodular) but also complementary (supermodular) relationships between objects, enhancing this setting to a broader range of applications (e.g., movie recommendations, medical treatments, etc.) where this is beneficial. In the two-term case, moreover, we study not only the more typical monolithic feedback approach but also a novel framework where feedback is available separately for each term. With real-world practicality and scalability in mind, we integrate Nyström sketching techniques to significantly improve the computational complexity, including for the purely submodular case. In the Gaussian process contextual bandits setting, we show sub-linear theoretical regret bounds in all cases. We also empirically show good applicability to recommendation systems and data subset selection.
APA
Narang, A., Sadeghi, O., Ratliff, L., Fazel, M. & Bilmes, J. (2024). Efficient Interactive Maximization of BP and Weakly Submodular Objectives. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:2670-2699. Available from https://proceedings.mlr.press/v244/narang24a.html.