DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory

Jerry Chee, Arturs Backurs, Rainie Heck, Li Zhang, Janardhan Kulkarni, Thomas Rothvoss, Sivakanth Gopi
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:924-951, 2025.

Abstract

Quantizing the weights of a neural network has two steps: (1) Finding a good low bit-complexity representation for weights (which we call the quantization grid) and (2) Rounding the original weights to values in the quantization grid. In this paper, we study the problem of rounding optimally given any quantization grid. The simplest and most commonly used way to round is Round-to-Nearest (RTN). By rounding in a data-dependent way instead, one can improve the quality of the quantized model significantly. We study the rounding problem through the lens of \emph{discrepancy theory}, which studies how well we can round a continuous solution to a discrete solution without affecting solution quality too much. We prove that given $m=\poly\left(\frac{\log n}{\epsilon}\right)$ samples from the data distribution, we can round nearly all $n$ model parameters such that the expected approximation error of the quantized model on the true data distribution is $\le \epsilon$, as long as the space of gradients of the original model is approximately low rank (which we empirically validate). Our algorithm is based on the famous Lovett-Meka algorithm from discrepancy theory and uses sticky Brownian motion to find a good rounding. We also give a simple and practical rounding algorithm called \emph{DiscQuant}, which is inspired by our theoretical insights. In our experiments, we demonstrate that DiscQuant significantly improves over the prior state-of-the-art rounding method called GPTQ and the baseline RTN over a range of benchmarks on Phi3mini-3.8B and Llama3.1-8B. For example, rounding Phi3mini-3.8B to a fixed quantization grid with 3.25 bits per parameter using DiscQuant achieves 64% accuracy on the GSM8k dataset, whereas GPTQ achieves 54% and RTN achieves 31% (the original model achieves 84%). We make our code available at \url{https://github.com/jerry-chee/DiscQuant}.
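To make the RTN baseline concrete, the following is a minimal PyTorch sketch of rounding each weight to its nearest value on a given quantization grid. The grid, tensor shapes, and function name are illustrative assumptions, not taken from the paper or its released code; data-dependent methods such as DiscQuant instead choose the rounding using calibration samples.

```python
import torch

def round_to_nearest(w: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
    """Round every entry of `w` to the closest value in `grid` (the RTN baseline).

    `grid` is a 1-D tensor of representable values, e.g. a uniform low-bit grid
    after per-channel scaling. This ignores the data entirely, which is exactly
    what data-dependent rounding methods improve on.
    """
    # Pairwise distances between weights and grid points: (numel(w), len(grid)).
    dists = (w.reshape(-1, 1) - grid.reshape(1, -1)).abs()
    nearest = grid[dists.argmin(dim=1)]
    return nearest.reshape(w.shape)

# Hypothetical usage: a uniform 3-bit grid on [-1, 1].
grid = torch.linspace(-1.0, 1.0, steps=8)
w = torch.randn(4, 4).clamp(-1.0, 1.0)
w_q = round_to_nearest(w, grid)
```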

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-chee25a,
  title     = {DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory},
  author    = {Chee, Jerry and Backurs, Arturs and Heck, Rainie and Zhang, Li and Kulkarni, Janardhan and Rothvoss, Thomas and Gopi, Sivakanth},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {924--951},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/chee25a/chee25a.pdf},
  url       = {https://proceedings.mlr.press/v291/chee25a.html}
}
Endnote
%0 Conference Paper
%T DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory
%A Jerry Chee
%A Arturs Backurs
%A Rainie Heck
%A Li Zhang
%A Janardhan Kulkarni
%A Thomas Rothvoss
%A Sivakanth Gopi
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-chee25a
%I PMLR
%P 924--951
%U https://proceedings.mlr.press/v291/chee25a.html
%V 291
APA
Chee, J., Backurs, A., Heck, R., Zhang, L., Kulkarni, J., Rothvoss, T. & Gopi, S. (2025). DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:924-951. Available from https://proceedings.mlr.press/v291/chee25a.html.