Geometric Barriers for Stable and Online Algorithms for Discrepancy Minimization

David Gamarnik, Eren C. Kizildağ, Will Perkins, Changji Xu
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:3231-3263, 2023.

Abstract

For many computational problems involving randomness, intricate geometric features of the solution space have been used to rigorously rule out powerful classes of algorithms. This is often accomplished through the lens of the multi-Overlap Gap Property ($m$-OGP), a rigorous barrier against algorithms exhibiting input stability. In this paper, we focus on the algorithmic tractability of two models: (i) discrepancy minimization, and (ii) the symmetric binary perceptron (\texttt{SBP}), a random constraint satisfaction problem as well as a toy model of a single-layer neural network. Our first focus is on the limits of online algorithms. By establishing and leveraging a novel geometric barrier, we obtain sharp hardness guarantees against online algorithms for both the \texttt{SBP} and discrepancy minimization. Our results match the best known algorithmic guarantees, up to constant factors. Our second focus is on efficiently finding a constant-discrepancy solution, given a random matrix $\mathcal{M}\in\mathbb{R}^{M\times n}$. In a smooth setting, where the entries of $\mathcal{M}$ are i.i.d. standard normal, we establish the presence of the $m$-OGP for $n=\Theta(M\log M)$. Consequently, we rule out the class of stable algorithms at this value. These results give the first rigorous evidence towards \citet[Conjecture 1]{altschuler2021discrepancy}. Our methods use the intricate geometry of the solution space to prove tight hardness results for online algorithms. The barrier we establish is a novel variant of the $m$-OGP; it concerns $m$-tuples of solutions with respect to correlated instances, with growing values of $m$, $m=\omega(1)$. Importantly, our results rule out online algorithms succeeding even with exponentially small probability.
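For context, the two models admit the following standard formulations, sketched here under the usual conventions of this literature (the margin parameter $\kappa$ and the $\sqrt{n}$ normalization are the common choices in prior work on the \texttt{SBP} and may differ slightly from the paper's exact setup):

% Discrepancy minimization: given $\mathcal{M}\in\mathbb{R}^{M\times n}$, find a signing
% $x\in\{-1,+1\}^n$ whose worst row sum is as small as possible in absolute value:
\[
\mathrm{disc}(\mathcal{M}) \;=\; \min_{x \in \{-1,+1\}^n} \|\mathcal{M}x\|_{\infty}.
\]
% Symmetric binary perceptron (\texttt{SBP}): for i.i.d. standard normal rows
% $g_1,\dots,g_M$ of $\mathcal{M}$ and a margin $\kappa>0$, the solution set is
\[
S(\kappa) \;=\; \bigl\{\, x \in \{-1,+1\}^n : |\langle g_i, x\rangle| \le \kappa\sqrt{n} \ \text{for all } i \in [M] \,\bigr\}.
\]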

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-gamarnik23a,
  title     = {Geometric Barriers for Stable and Online Algorithms for Discrepancy Minimization},
  author    = {Gamarnik, David and Kizilda{\u{g}}, Eren C. and Perkins, Will and Xu, Changji},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {3231--3263},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/gamarnik23a/gamarnik23a.pdf},
  url       = {https://proceedings.mlr.press/v195/gamarnik23a.html},
  abstract  = {For many computational problems involving randomness, intricate geometric features of the solution space have been used to rigorously rule out powerful classes of algorithms. This is often accomplished through the lens of the multi-Overlap Gap Property ($m$-OGP), a rigorous barrier against algorithms exhibiting input stability. In this paper, we focus on the algorithmic tractability of two models: (i) discrepancy minimization, and (ii) the symmetric binary perceptron (\texttt{SBP}), a random constraint satisfaction problem as well as a toy model of a single-layer neural network. Our first focus is on the limits of online algorithms. By establishing and leveraging a novel geometric barrier, we obtain sharp hardness guarantees against online algorithms for both the \texttt{SBP} and discrepancy minimization. Our results match the best known algorithmic guarantees, up to constant factors. Our second focus is on efficiently finding a constant-discrepancy solution, given a random matrix $\mathcal{M}\in\mathbb{R}^{M\times n}$. In a smooth setting, where the entries of $\mathcal{M}$ are i.i.d. standard normal, we establish the presence of the $m$-OGP for $n=\Theta(M\log M)$. Consequently, we rule out the class of stable algorithms at this value. These results give the first rigorous evidence towards \citet[Conjecture 1]{altschuler2021discrepancy}. Our methods use the intricate geometry of the solution space to prove tight hardness results for online algorithms. The barrier we establish is a novel variant of the $m$-OGP; it concerns $m$-tuples of solutions with respect to correlated instances, with growing values of $m$, $m=\omega(1)$. Importantly, our results rule out online algorithms succeeding even with exponentially small probability.}
}
EndNote
%0 Conference Paper
%T Geometric Barriers for Stable and Online Algorithms for Discrepancy Minimization
%A David Gamarnik
%A Eren C. Kizildağ
%A Will Perkins
%A Changji Xu
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-gamarnik23a
%I PMLR
%P 3231--3263
%U https://proceedings.mlr.press/v195/gamarnik23a.html
%V 195
%X For many computational problems involving randomness, intricate geometric features of the solution space have been used to rigorously rule out powerful classes of algorithms. This is often accomplished through the lens of the multi-Overlap Gap Property ($m$-OGP), a rigorous barrier against algorithms exhibiting input stability. In this paper, we focus on the algorithmic tractability of two models: (i) discrepancy minimization, and (ii) the symmetric binary perceptron (\texttt{SBP}), a random constraint satisfaction problem as well as a toy model of a single-layer neural network. Our first focus is on the limits of online algorithms. By establishing and leveraging a novel geometric barrier, we obtain sharp hardness guarantees against online algorithms for both the \texttt{SBP} and discrepancy minimization. Our results match the best known algorithmic guarantees, up to constant factors. Our second focus is on efficiently finding a constant-discrepancy solution, given a random matrix $\mathcal{M}\in\mathbb{R}^{M\times n}$. In a smooth setting, where the entries of $\mathcal{M}$ are i.i.d. standard normal, we establish the presence of the $m$-OGP for $n=\Theta(M\log M)$. Consequently, we rule out the class of stable algorithms at this value. These results give the first rigorous evidence towards \citet[Conjecture 1]{altschuler2021discrepancy}. Our methods use the intricate geometry of the solution space to prove tight hardness results for online algorithms. The barrier we establish is a novel variant of the $m$-OGP; it concerns $m$-tuples of solutions with respect to correlated instances, with growing values of $m$, $m=\omega(1)$. Importantly, our results rule out online algorithms succeeding even with exponentially small probability.
APA
Gamarnik, D., Kizildağ, E.C., Perkins, W. & Xu, C. (2023). Geometric Barriers for Stable and Online Algorithms for Discrepancy Minimization. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:3231-3263. Available from https://proceedings.mlr.press/v195/gamarnik23a.html.
