Approximation Guarantees for Adaptive Sampling

Eric Balkanski, Yaron Singer;
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:393-402, 2018.

Abstract

In this paper we analyze an adaptive sampling approach for submodular maximization. Adaptive sampling is a technique that has recently been shown to achieve a constant factor approximation guarantee for submodular maximization under a cardinality constraint with exponentially fewer adaptive rounds than any previously studied constant factor approximation algorithm for this problem. Adaptivity quantifies the number of sequential rounds that an algorithm makes when function evaluations can be executed in parallel, and is the parallel running time of an algorithm, up to low order terms. Adaptive sampling achieves its exponential speedup at the expense of approximation: in theory, it is guaranteed to produce a solution that is a 1/3 approximation to the optimum. Nevertheless, experiments show that adaptive sampling techniques achieve far better values in practice. In this paper we provide theoretical justification for this phenomenon. In particular, we show that under very mild conditions on the curvature of a function, adaptive sampling techniques achieve an approximation arbitrarily close to 1/2 while maintaining their low adaptivity. Furthermore, we show that the approximation ratio approaches 1 in direct relationship to a homogeneity property of the submodular function. In addition, we conduct experiments on real data sets in which the curvature and homogeneity properties can be easily manipulated, and demonstrate the relationship between approximation and curvature, as well as the effectiveness of adaptive sampling in practice.
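To make the adaptivity notion concrete, the following is a minimal sketch of an adaptive-sampling-style algorithm for a coverage function (a canonical submodular function). It is a simplified illustration, not the authors' exact algorithm or their guarantees: the function names (`coverage`, `adaptive_sampling`) and the parameters (`rounds`, `samples_per_round`) are assumptions chosen for this example. The key point it illustrates is that within each round all candidate-block evaluations are independent and could run in parallel, so the number of sequential rounds, not the number of function evaluations, governs parallel running time.

```python
import random

def coverage(universe_sets, S):
    """Coverage value: number of distinct items covered by the sets indexed by S.
    Coverage is monotone submodular, so it fits the setting described above."""
    covered = set()
    for i in S:
        covered |= universe_sets[i]
    return len(covered)

def adaptive_sampling(ground, f, k, rounds=5, samples_per_round=20, seed=0):
    """Simplified adaptive-sampling sketch for max f(S) s.t. |S| <= k.

    Each round draws many random blocks of remaining elements; because the
    block evaluations do not depend on one another, a parallel implementation
    issues all samples_per_round queries of a round simultaneously, giving
    `rounds` sequential (adaptive) steps in total."""
    rng = random.Random(seed)
    S = []
    X = list(ground)
    block = max(1, k // rounds)
    for _ in range(rounds):
        if not X or len(S) >= k:
            break
        b = min(block, k - len(S), len(X))
        # Candidate blocks: these f-evaluations are the parallelizable work.
        candidates = [rng.sample(X, b) for _ in range(samples_per_round)]
        best = max(candidates, key=lambda R: f(S + R) - f(S))
        S += best
        # Filter step: discard elements with no remaining marginal value.
        X = [x for x in X if x not in S and f(S + [x]) - f(S) > 0]
    return S
```

As a usage example, running `adaptive_sampling` on a small family of sets with `k = 3` returns at most three set indices whose union covers a large part of the universe; the classical greedy algorithm would instead make `k` fully sequential rounds.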