Probabilistic Submodular Maximization in Sub-Linear Time

Serban Stan, Morteza Zadimoghaddam, Andreas Krause, Amin Karbasi
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:3241-3250, 2017.

Abstract

In this paper, we consider optimizing submodular functions that are drawn from some unknown distribution. This setting arises, e.g., in recommender systems, where the utility of a subset of items may depend on a user-specific submodular utility function. In modern applications, the ground set of items is often so large that even the widely used (lazy) greedy algorithm is not efficient enough. As a remedy, we introduce the problem of sublinear-time probabilistic submodular maximization: given training examples of functions (e.g., via user feature vectors), we seek to reduce the ground set so that optimizing new functions drawn from the same distribution yields almost as much value when restricted to the reduced ground set as when using the full set. We cast this problem as two-stage submodular maximization and develop a novel, efficient algorithm that offers a $\frac{1}{2}(1 - 1/e^2)$ approximation ratio for general monotone submodular functions and general matroid constraints. We demonstrate the effectiveness of our approach on several real-world applications, where running the maximization problem on the reduced ground set leads to a two-orders-of-magnitude speed-up while incurring almost no loss.
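Concretely, with $m$ training functions $f_1, \dots, f_m$ standing in for the unknown distribution, the two-stage problem described above can be written as the following empirical objective (the notation here is ours, not quoted from the paper): choose a reduced ground set $S$ of size at most $\ell$ so that every function still admits a good feasible solution inside $S$,

    $$\max_{S \subseteq V,\; |S| \le \ell} \;\; \frac{1}{m} \sum_{i=1}^{m} \; \max_{T \subseteq S,\; T \in \mathcal{I}} f_i(T),$$

where $V$ is the full ground set and $\mathcal{I}$ is the family of feasible sets (e.g., all sets of size at most $k$, or the independent sets of a matroid). The inner maximization is what a new user pays for at query time; the outer maximization is solved once, offline.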
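Below is a minimal Python sketch of this two-stage pipeline under a simple cardinality constraint ($|T| \le k$), in the spirit of the replacement-greedy idea: the first stage grows the reduced set $S$ while maintaining one running solution per training function, crediting each candidate element with the best add-or-swap gain it offers; the second stage is plain greedy on $S$. All names and the toy coverage functions are ours for illustration; this is a sketch of the general idea, not the paper's exact algorithm or its guarantee.

    import random

    def replacement_gain(f, T, x, k):
        """Best gain from introducing x into the running solution T (|T| <= k):
        add x if there is room, otherwise try swapping it for each current
        element; never accept a negative gain. Returns (gain, new solution)."""
        if x in T:
            return 0.0, T
        base = f(T)
        if len(T) < k:
            cand = T | {x}
            return f(cand) - base, cand
        best_gain, best_T = 0.0, T
        for y in T:
            cand = (T - {y}) | {x}
            gain = f(cand) - base
            if gain > best_gain:
                best_gain, best_T = gain, cand
        return best_gain, best_T

    def reduce_ground_set(train_fs, V, ell, k):
        """First stage (offline): grow a reduced ground set S of size ell,
        keeping one running size-<=k solution per training function and
        crediting each candidate with its total add-or-swap gain."""
        S = set()
        sols = [set() for _ in train_fs]
        for _ in range(ell):
            best_total, best_x, best_updates = -1.0, None, None
            for x in V - S:
                total, updates = 0.0, []
                for f, T in zip(train_fs, sols):
                    gain, new_T = replacement_gain(f, T, x, k)
                    total += gain
                    updates.append(new_T)
                if total > best_total:
                    best_total, best_x, best_updates = total, x, updates
            S.add(best_x)
            sols = [set(T) for T in best_updates]
        return S

    def greedy(f, ground, k):
        """Second stage (query time): plain greedy maximization of f."""
        T = set()
        for _ in range(k):
            x = max(ground - T, key=lambda v: f(T | {v}) - f(T), default=None)
            if x is None or f(T | {x}) - f(T) <= 0:
                break
            T.add(x)
        return T

    # Toy usage: hypothetical per-user coverage functions over 200 items.
    random.seed(0)
    V = set(range(200))

    def make_coverage():
        topics = {v: frozenset(random.sample(range(30), 3)) for v in V}
        return lambda T: len(set().union(*(topics[v] for v in T))) if T else 0

    train = [make_coverage() for _ in range(5)]
    S = reduce_ground_set(train, V, ell=20, k=5)   # offline reduction
    new_f = make_coverage()                        # fresh user at query time
    print(new_f(greedy(new_f, S, 5)), new_f(greedy(new_f, V, 5)))

The point of the sketch is the shape of the computation: the expensive pass over the full ground set $V$ happens once, offline, while each new function is served by running greedy over only the $\ell$ elements of $S$, which is where the reported speed-up comes from.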

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-stan17a,
  title     = {Probabilistic Submodular Maximization in Sub-Linear Time},
  author    = {Serban Stan and Morteza Zadimoghaddam and Andreas Krause and Amin Karbasi},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {3241--3250},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/stan17a/stan17a.pdf},
  url       = {https://proceedings.mlr.press/v70/stan17a.html},
  abstract  = {In this paper, we consider optimizing submodular functions that are drawn from some unknown distribution. This setting arises, e.g., in recommender systems, where the utility of a subset of items may depend on a user-specific submodular utility function. In modern applications, the ground set of items is often so large that even the widely used (lazy) greedy algorithm is not efficient enough. As a remedy, we introduce the problem of sublinear time probabilistic submodular maximization: Given training examples of functions (e.g., via user feature vectors), we seek to reduce the ground set so that optimizing new functions drawn from the same distribution will provide almost as much value when restricted to the reduced ground set as when using the full set. We cast this problem as a two-stage submodular maximization and develop a novel efficient algorithm for this problem which offers $1/2(1 - 1/e^2)$ approximation ratio for general monotone submodular functions and general matroid constraints. We demonstrate the effectiveness of our approach on several real-world applications where running the maximization problem on the reduced ground set leads to two orders of magnitude speed-up while incurring almost no loss.}
}
Endnote
%0 Conference Paper
%T Probabilistic Submodular Maximization in Sub-Linear Time
%A Serban Stan
%A Morteza Zadimoghaddam
%A Andreas Krause
%A Amin Karbasi
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-stan17a
%I PMLR
%P 3241--3250
%U https://proceedings.mlr.press/v70/stan17a.html
%V 70
%X In this paper, we consider optimizing submodular functions that are drawn from some unknown distribution. This setting arises, e.g., in recommender systems, where the utility of a subset of items may depend on a user-specific submodular utility function. In modern applications, the ground set of items is often so large that even the widely used (lazy) greedy algorithm is not efficient enough. As a remedy, we introduce the problem of sublinear time probabilistic submodular maximization: Given training examples of functions (e.g., via user feature vectors), we seek to reduce the ground set so that optimizing new functions drawn from the same distribution will provide almost as much value when restricted to the reduced ground set as when using the full set. We cast this problem as a two-stage submodular maximization and develop a novel efficient algorithm for this problem which offers $1/2(1 - 1/e^2)$ approximation ratio for general monotone submodular functions and general matroid constraints. We demonstrate the effectiveness of our approach on several real-world applications where running the maximization problem on the reduced ground set leads to two orders of magnitude speed-up while incurring almost no loss.
APA
Stan, S., Zadimoghaddam, M., Krause, A. & Karbasi, A. (2017). Probabilistic Submodular Maximization in Sub-Linear Time. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:3241-3250. Available from https://proceedings.mlr.press/v70/stan17a.html.