Discrepancy Minimization in Input-Sparsity Time

Yichuan Deng, Xiaoyu Li, Zhao Song, Omri Weinstein
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:13181-13236, 2025.

Abstract

A recent work by [Larsen, SODA 2023] introduced a faster combinatorial alternative to Bansal’s SDP algorithm for finding a coloring $x \in \{-1, 1\}^n$ that approximately minimizes the discrepancy $\mathrm{disc}(A, x) := \|Ax\|_{\infty}$ of a real-valued $m \times n$ matrix $A$. Larsen’s algorithm runs in $\widetilde{O}(mn^2)$ time, compared to Bansal’s $\widetilde{O}(mn^{4.5})$-time algorithm, with a slightly weaker logarithmic approximation ratio in terms of the hereditary discrepancy of $A$ [Bansal, FOCS 2010]. We present a combinatorial $\widetilde{O}(\mathrm{nnz}(A) + n^3)$-time algorithm with the same approximation guarantee as Larsen’s, which is optimal for tall matrices where $m = \mathrm{poly}(n)$. Using a more intricate analysis and fast matrix multiplication, we further achieve a runtime of $\widetilde{O}(\mathrm{nnz}(A) + n^{2.53})$, breaking the cubic barrier for square matrices and surpassing the limitations of linear-programming approaches [Eldan and Singh, RS&A 2018]. Our algorithm relies on two key ideas: (i) a new sketching technique for finding a projection matrix with a short $\ell_2$-basis using implicit leverage-score sampling, and (ii) a data structure for efficiently implementing the iterative Edge-Walk partial-coloring algorithm [Lovett and Meka, SICOMP 2015], together with an alternative analysis that enables “lazy” batch updates with low-rank corrections. Our results nearly close the computational gap between real-valued and binary matrices, for which input-sparsity-time coloring was recently obtained by [Jain, Sah and Sawhney, SODA 2023].
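To make the objective concrete, here is a minimal NumPy sketch (an illustration, not code from the paper). It evaluates $\mathrm{disc}(A, x) = \|Ax\|_{\infty}$ for uniformly random colorings, which achieve $O(\sqrt{n \log m})$ with high probability and are the natural baseline, and it computes textbook explicit leverage scores, the quantity that the paper's sketching technique approximates implicitly. The matrix $A$, its dimensions, and the trial count are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)
m, n = 256, 64
A = rng.integers(0, 2, size=(m, n)).astype(float)  # illustrative 0/1 set-system matrix

def disc(A, x):
    # The objective from the abstract: disc(A, x) = ||A x||_inf.
    return np.max(np.abs(A @ x))

# Baseline: a uniformly random coloring x in {-1, 1}^n gives
# disc(A, x) = O(sqrt(n log m)) with high probability (Chernoff + union bound).
best = min(disc(A, rng.choice([-1.0, 1.0], size=n)) for _ in range(100))
print(f"best of 100 random colorings: {best:.1f}; sqrt(n log m) = {np.sqrt(n * np.log(m)):.1f}")

# Textbook (explicit) leverage scores: for a thin QR factorization A = QR with A
# of full column rank, the score of row i is ||Q[i, :]||_2^2, and the scores sum
# to rank(A); rows with large scores dominate the row-sampling distribution.
Q, _ = np.linalg.qr(A)
lev = (Q ** 2).sum(axis=1)

Note that forming $Q$ explicitly costs $O(mn^2)$ time, which is exactly what the paper's implicit, sketch-based approach avoids; the snippet only illustrates the quantity being sampled.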

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-deng25f,
  title     = {Discrepancy Minimization in Input-Sparsity Time},
  author    = {Deng, Yichuan and Li, Xiaoyu and Song, Zhao and Weinstein, Omri},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {13181--13236},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/deng25f/deng25f.pdf},
  url       = {https://proceedings.mlr.press/v267/deng25f.html}
}
Endnote
%0 Conference Paper
%T Discrepancy Minimization in Input-Sparsity Time
%A Yichuan Deng
%A Xiaoyu Li
%A Zhao Song
%A Omri Weinstein
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-deng25f
%I PMLR
%P 13181--13236
%U https://proceedings.mlr.press/v267/deng25f.html
%V 267
APA
Deng, Y., Li, X., Song, Z. & Weinstein, O. (2025). Discrepancy Minimization in Input-Sparsity Time. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:13181-13236. Available from https://proceedings.mlr.press/v267/deng25f.html.