Correlated Quantization for Faster Nonconvex Distributed Optimization

Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:3361-3387, 2025.

Abstract

Quantization [Alistarh et al., 2017] is an important (stochastic) compression technique that reduces the volume of transmitted bits during each communication round in distributed model training. Suresh et al. [2022] introduce correlated quantizers and show their advantages over independent counterparts by analyzing distributed SGD communication complexity. We analyze the forefront distributed non-convex optimization algorithm MARINA [Gorbunov et al., 2022] utilizing the proposed correlated quantizers and show that it outperforms the original MARINA and the distributed SGD of Suresh et al. [2022] in terms of communication complexity. We significantly refine the original analysis of MARINA without any additional assumptions using the weighted Hessian variance [Tyurin et al., 2022], and then we expand the theoretical framework of MARINA to accommodate a substantially broader range of potentially correlated and biased compressors, thus extending the applicability of the method beyond the conventional independent unbiased compressor setup. Extensive experimental results corroborate our theoretical findings.
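To give a flavor of the mechanism, below is a minimal Python sketch contrasting independent stochastic (dithered) quantization with a correlated variant driven by shared randomness. It is an illustration in the spirit of correlated quantizers, not the exact construction of Suresh et al. [2022] nor the compressors analyzed in the paper; the function names and the stratified dither rule u_i = (u + i/n) mod 1 are assumptions made for this example.

```python
import numpy as np

def stochastic_round(x, delta, u):
    """Unbiased stochastic rounding of x onto a grid of spacing delta,
    driven by a uniform dither u in [0, 1): E[q(x)] = x."""
    low = np.floor(x / delta)
    frac = x / delta - low
    return (low + (frac > u).astype(float)) * delta

def quantize_clients(xs, delta, correlated, rng):
    """Quantize one scalar per client (illustrative, not the paper's scheme).

    correlated=False: each client draws an independent dither.
    correlated=True : stratified dithers u_i = (u + i/n) mod 1 built from a
                      single shared uniform u; each u_i is still marginally
                      Uniform[0, 1), so every client's quantizer stays
                      unbiased, but the rounding errors are correlated and
                      typically cancel in the average across clients.
    """
    n = len(xs)
    if correlated:
        u = (rng.uniform() + np.arange(n) / n) % 1.0
    else:
        u = rng.uniform(size=n)
    return stochastic_round(np.asarray(xs, dtype=float), delta, u)

rng = np.random.default_rng(0)
xs = rng.uniform(size=16)  # one scalar coordinate per client
var_ind = np.var([quantize_clients(xs, 0.25, False, rng).mean()
                  for _ in range(20_000)])
var_cor = np.var([quantize_clients(xs, 0.25, True, rng).mean()
                  for _ in range(20_000)])
print(f"variance of averaged quantizer  independent: {var_ind:.2e}  "
      f"correlated: {var_cor:.2e}")
```

Marginally each u_i remains Uniform[0, 1), so unbiasedness per client is preserved; the correlation only changes how the per-client rounding errors combine in the server's average, which is the kind of quantity the communication-complexity analysis tracks.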

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-panferov25a,
  title = {Correlated Quantization for Faster Nonconvex Distributed Optimization},
  author = {Panferov, Andrei and Demidovich, Yury and Rammal, Ahmad and Richt\'{a}rik, Peter},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages = {3361--3387},
  year = {2025},
  editor = {Chiappa, Silvia and Magliacane, Sara},
  volume = {286},
  series = {Proceedings of Machine Learning Research},
  month = {21--25 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/panferov25a/panferov25a.pdf},
  url = {https://proceedings.mlr.press/v286/panferov25a.html},
  abstract = {Quantization [Alistarh et al., 2017] is an important (stochastic) compression technique that reduces the volume of transmitted bits during each communication round in distributed model training. Suresh et al. [2022] introduce correlated quantizers and show their advantages over independent counterparts by analyzing distributed SGD communication complexity. We analyze the forefront distributed non-convex optimization algorithm MARINA [Gorbunov et al., 2022] utilizing the proposed correlated quantizers and show that it outperforms the original MARINA and the distributed SGD of Suresh et al. [2022] in terms of communication complexity. We significantly refine the original analysis of MARINA without any additional assumptions using the weighted Hessian variance [Tyurin et al., 2022], and then we expand the theoretical framework of MARINA to accommodate a substantially broader range of potentially correlated and biased compressors, thus extending the applicability of the method beyond the conventional independent unbiased compressor setup. Extensive experimental results corroborate our theoretical findings.}
}
Endnote
%0 Conference Paper
%T Correlated Quantization for Faster Nonconvex Distributed Optimization
%A Andrei Panferov
%A Yury Demidovich
%A Ahmad Rammal
%A Peter Richtárik
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-panferov25a
%I PMLR
%P 3361--3387
%U https://proceedings.mlr.press/v286/panferov25a.html
%V 286
%X Quantization [Alistarh et al., 2017] is an important (stochastic) compression technique that reduces the volume of transmitted bits during each communication round in distributed model training. Suresh et al. [2022] introduce correlated quantizers and show their advantages over independent counterparts by analyzing distributed SGD communication complexity. We analyze the forefront distributed non-convex optimization algorithm MARINA [Gorbunov et al., 2022] utilizing the proposed correlated quantizers and show that it outperforms the original MARINA and the distributed SGD of Suresh et al. [2022] in terms of communication complexity. We significantly refine the original analysis of MARINA without any additional assumptions using the weighted Hessian variance [Tyurin et al., 2022], and then we expand the theoretical framework of MARINA to accommodate a substantially broader range of potentially correlated and biased compressors, thus extending the applicability of the method beyond the conventional independent unbiased compressor setup. Extensive experimental results corroborate our theoretical findings.
APA
Panferov, A., Demidovich, Y., Rammal, A. & Richtárik, P. (2025). Correlated Quantization for Faster Nonconvex Distributed Optimization. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:3361-3387. Available from https://proceedings.mlr.press/v286/panferov25a.html.