Coresets for Multiple ℓp Regression
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:53202-53233, 2024.
Abstract
A coreset of a dataset with n examples and d features is a weighted subset of examples that is sufficient for solving downstream data analysis tasks. Nearly optimal constructions of coresets for least squares and ℓp linear regression with a single response are known in prior work. However, for multiple ℓp regression, where there can be m responses, no constructions with size sublinear in m are known. In this work, we construct coresets of size $\tilde O(\varepsilon^{-2}d)$ for $p<2$ and $\tilde O(\varepsilon^{-p}d^{p/2})$ for $p>2$, independently of m (i.e., dimension-free), that approximate the multiple ℓp regression objective at every point in the domain up to $(1\pm\varepsilon)$ relative error. If we only need to preserve the minimizer subject to a subspace constraint, we improve these bounds by an $\varepsilon$ factor for all $p>1$. All of our bounds are nearly tight. We give two applications of our results. First, we settle the number of uniform samples needed to approximate ℓp Euclidean power means up to a $(1+\varepsilon)$ factor, showing that $\tilde\Theta(\varepsilon^{-2})$ samples for $p=1$, $\tilde\Theta(\varepsilon^{-1})$ samples for $1<p<2$, and $\tilde\Theta(\varepsilon^{1-p})$ samples for $p>2$ are tight, answering a question of Cohen-Addad, Saulpic, and Schwiegelshohn. Second, we show that for $1
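The power-mean application above concerns how well a uniformly sampled, reweighted subset of points approximates the ℓp Euclidean power mean objective. The following minimal numerical sketch (not the paper's construction, and making no claim about the tight sample-size bounds) illustrates the setup for $p=1$: the reweighted cost of a uniform subsample is an unbiased estimate of the full cost at any fixed center. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_mean_cost(points, center, p):
    # Sum over points of the Euclidean distance to `center`, raised to the p-th power.
    return float(np.sum(np.linalg.norm(points - center, axis=1) ** p))

# Synthetic data: n points in R^d (illustrative sizes, not from the paper).
n, d, p = 20000, 3, 1
X = rng.normal(size=(n, d))

# Uniformly subsample s points without replacement; reweighting each sampled
# point by n/s makes the subsample cost an unbiased estimator of the full cost.
s = 2000
sample = X[rng.choice(n, size=s, replace=False)]
weight = n / s

center = rng.normal(size=d)  # an arbitrary fixed candidate center
full_cost = power_mean_cost(X, center, p)
approx_cost = weight * power_mean_cost(sample, center, p)
rel_err = abs(approx_cost - full_cost) / full_cost
print(rel_err)  # small relative error for this seed
```

The paper's results characterize how large s must be, as a function of $\varepsilon$ and p, for such an approximation to hold to a $(1+\varepsilon)$ factor; this sketch only demonstrates the estimator at a single center.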