Fast Algorithms for a New Relaxation of Optimal Transport

Moses Charikar, Beidi Chen, Christopher Ré, Erik Waingarten
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:4831-4862, 2023.

Abstract

We introduce a new class of objectives for optimal transport computations of datasets in high-dimensional Euclidean spaces. The new objectives are parametrized by $\rho \geq 1$, and provide a metric space $\mathcal{R}_{\rho}(\cdot, \cdot)$ for discrete probability distributions in $\mathbb{R}^d$. As $\rho$ approaches $1$, the metric approaches the Earth Mover’s distance, but for $\rho$ larger than (but close to) $1$, it admits significantly faster algorithms. Namely, for distributions $\mu$ and $\nu$ supported on $n$ and $m$ vectors in $\mathbb{R}^d$ of norm at most $r$ and any $\epsilon > 0$, we give an algorithm which outputs an additive $\epsilon r$ approximation to $\mathcal{R}_{\rho}(\mu, \nu)$ in time $(n+m) \cdot \mathrm{poly}((nm)^{(\rho-1)/\rho} \cdot 2^{\rho / (\rho-1)} / \epsilon)$.
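The runtime bound trades off two factors: as $\rho \to 1$, the $(nm)^{(\rho-1)/\rho}$ term tends to $1$ (near-linear dependence on the support sizes), while the $2^{\rho/(\rho-1)}$ term blows up. A minimal sketch of this tradeoff, purely illustrative: the function name `runtime_factor` is ours, and the unspecified $\mathrm{poly}(\cdot)$ from the bound is omitted, so the numbers below are not actual running times.

```python
def runtime_factor(n, m, rho, eps):
    """Evaluate the per-point term from the paper's runtime bound,
    (nm)^((rho-1)/rho) * 2^(rho/(rho-1)) / eps, for rho > 1.
    The full bound is (n + m) * poly(this factor); the polynomial
    is left unspecified in the abstract, so this is illustrative only.
    """
    return (n * m) ** ((rho - 1) / rho) * 2 ** (rho / (rho - 1)) / eps

# As rho decreases toward 1, the (nm)^((rho-1)/rho) factor shrinks
# while the 2^(rho/(rho-1)) factor grows:
for rho in (2.0, 1.5, 1.1):
    print(rho, runtime_factor(10**6, 10**6, rho, 0.1))
```

For $n = m = 10^6$ and $\rho = 2$, for instance, the factor is $(10^{12})^{1/2} \cdot 2^2 / 0.1 = 4 \times 10^7$, already far below the $nm = 10^{12}$ scale of a full transport computation.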

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-charikar23a,
  title     = {Fast Algorithms for a New Relaxation of Optimal Transport},
  author    = {Charikar, Moses and Chen, Beidi and R{\'e}, Christopher and Waingarten, Erik},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {4831--4862},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/charikar23a/charikar23a.pdf},
  url       = {https://proceedings.mlr.press/v195/charikar23a.html},
  abstract  = {We introduce a new class of objectives for optimal transport computations of datasets in high-dimensional Euclidean spaces. The new objectives are parametrized by $\rho \geq 1$, and provide a metric space $\mathcal{R}_{\rho}(\cdot, \cdot)$ for discrete probability distributions in $\mathbb{R}^d$. As $\rho$ approaches $1$, the metric approaches the Earth Mover’s distance, but for $\rho$ larger than (but close to) $1$, admits significantly faster algorithms. Namely, for distributions $\mu$ and $\nu$ supported on $n$ and $m$ vectors in $\mathbb{R}^d$ of norm at most $r$ and any $\epsilon > 0$, we give an algorithm which outputs an additive $\epsilon r$ approximation to $\mathcal{R}_{\rho}(\mu, \nu)$ in time $(n+m) \cdot \mathrm{poly}((nm)^{(\rho-1)/\rho} \cdot 2^{\rho / (\rho-1)} / \epsilon)$.}
}
Endnote
%0 Conference Paper
%T Fast Algorithms for a New Relaxation of Optimal Transport
%A Moses Charikar
%A Beidi Chen
%A Christopher Ré
%A Erik Waingarten
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-charikar23a
%I PMLR
%P 4831--4862
%U https://proceedings.mlr.press/v195/charikar23a.html
%V 195
%X We introduce a new class of objectives for optimal transport computations of datasets in high-dimensional Euclidean spaces. The new objectives are parametrized by $\rho \geq 1$, and provide a metric space $\mathcal{R}_{\rho}(\cdot, \cdot)$ for discrete probability distributions in $\mathbb{R}^d$. As $\rho$ approaches $1$, the metric approaches the Earth Mover’s distance, but for $\rho$ larger than (but close to) $1$, admits significantly faster algorithms. Namely, for distributions $\mu$ and $\nu$ supported on $n$ and $m$ vectors in $\mathbb{R}^d$ of norm at most $r$ and any $\epsilon > 0$, we give an algorithm which outputs an additive $\epsilon r$ approximation to $\mathcal{R}_{\rho}(\mu, \nu)$ in time $(n+m) \cdot \mathrm{poly}((nm)^{(\rho-1)/\rho} \cdot 2^{\rho / (\rho-1)} / \epsilon)$.
APA
Charikar, M., Chen, B., Ré, C. & Waingarten, E. (2023). Fast Algorithms for a New Relaxation of Optimal Transport. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:4831-4862. Available from https://proceedings.mlr.press/v195/charikar23a.html.