Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction

Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick
Conference on Parsimony and Learning, PMLR 234:341-378, 2024.

Abstract

Despite impressive performance, deep neural networks incur significant memory and computation costs, prohibiting their application in resource-constrained scenarios. Sparse training is one of the most common techniques to reduce these costs; however, the sparsity constraints add difficulty to the optimization, resulting in increased training time and instability. In this work, we aim to overcome this problem and achieve space-time co-efficiency. To accelerate and stabilize the convergence of sparse training, we analyze the gradient changes and develop an adaptive gradient correction method. Specifically, we approximate the correlation between the current and previous gradients, which is used to balance the two gradients to obtain a corrected gradient. Our method can be used with the most popular sparse training pipelines under both standard and adversarial setups. Theoretically, we prove that our method can accelerate the convergence rate of sparse training. Extensive experiments on multiple datasets, model architectures, and sparsities demonstrate that our method outperforms leading sparse training methods by up to 5.0% in accuracy given the same number of training epochs, and reduces the number of training epochs by up to 52.1% to achieve the same accuracy. Our code is available at: https://github.com/StevenBoys/AGENT.
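
The core idea described above (balancing the current stochastic gradient against the previous one using an estimated correlation) can be illustrated with a minimal sketch. The following Python/PyTorch code is not the authors' AGENT implementation; it is a hypothetical, simplified update rule in which the "correlation" is approximated by a smoothed cosine similarity between the flattened current and previous gradients, and a class name (AdaptiveGradientCorrection) and its parameters (lr, smoothing) are made up for illustration.

# Minimal sketch of an adaptive gradient-correction step (NOT the authors' AGENT code).
# Assumption: the correlation is estimated as an exponentially smoothed cosine
# similarity between the current and previous flattened gradients.
import torch

class AdaptiveGradientCorrection:
    def __init__(self, params, lr=0.1, smoothing=0.9):
        self.params = list(params)
        self.lr = lr
        self.smoothing = smoothing      # EMA factor for the correlation estimate
        self.prev_grads = None          # previous raw gradients, one tensor per parameter
        self.corr = 0.0                 # running correlation estimate

    @torch.no_grad()
    def step(self):
        raw = [p.grad.detach().clone() for p in self.params]
        corrected = raw
        if self.prev_grads is not None:
            # Approximate the correlation between current and previous gradients.
            cur = torch.cat([g.flatten() for g in raw])
            prev = torch.cat([g.flatten() for g in self.prev_grads])
            cos = torch.nn.functional.cosine_similarity(cur, prev, dim=0).item()
            self.corr = self.smoothing * self.corr + (1.0 - self.smoothing) * cos
            # Balance the two gradients: higher estimated correlation gives
            # more weight to the previous gradient.
            w = max(0.0, min(1.0, self.corr))
            corrected = [(1.0 - w) * g + w * pg for g, pg in zip(raw, self.prev_grads)]
        for p, g in zip(self.params, corrected):
            p.add_(g, alpha=-self.lr)   # plain SGD update with the corrected gradient
        self.prev_grads = raw

# Hypothetical usage: after loss.backward(), call step() as with any optimizer, e.g.
#   opt = AdaptiveGradientCorrection(model.parameters(), lr=0.1)
#   loss.backward(); opt.step(); model.zero_grad()

In an actual sparse-training pipeline such a step would act only on the weights kept by the current sparsity mask, and the paper's correlation estimate, theoretical guarantees, and adversarial-training variant differ from this toy version; see the paper and the linked repository for the authors' method.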

Cite this Paper


BibTeX
@InProceedings{pmlr-v234-lei24a,
  title     = {Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction},
  author    = {Lei, Bowen and Xu, Dongkuan and Zhang, Ruqi and He, Shuren and Mallick, Bani},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {341--378},
  year      = {2024},
  editor    = {Chi, Yuejie and Dziugaite, Gintare Karolina and Qu, Qing and Wang, Atlas and Zhu, Zhihui},
  volume    = {234},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Jan},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v234/lei24a/lei24a.pdf},
  url       = {https://proceedings.mlr.press/v234/lei24a.html},
  abstract  = {Despite impressive performance, deep neural networks require significant memory and computation costs, prohibiting their application in resource-constrained scenarios. Sparse training is one of the most common techniques to reduce these costs, however, the sparsity constraints add difficulty to the optimization, resulting in an increase in training time and instability. In this work, we aim to overcome this problem and achieve space-time co-efficiency. To accelerate and stabilize the convergence of sparse training, we analyze the gradient changes and develop an adaptive gradient correction method. Specifically, we approximate the correlation between the current and previous gradients, which is used to balance the two gradients to obtain a corrected gradient. Our method can be used with the most popular sparse training pipelines under both standard and adversarial setups. Theoretically, we prove that our method can accelerate the convergence rate of sparse training. Extensive experiments on multiple datasets, model architectures, and sparsities demonstrate that our method outperforms leading sparse training methods by up to \textbf{5.0%} in accuracy given the same number of training epochs, and reduces the number of training epochs by up to \textbf{52.1%} to achieve the same accuracy. Our code is available on: \url{https://github.com/StevenBoys/AGENT}.}
}
Endnote
%0 Conference Paper
%T Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
%A Bowen Lei
%A Dongkuan Xu
%A Ruqi Zhang
%A Shuren He
%A Bani Mallick
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Yuejie Chi
%E Gintare Karolina Dziugaite
%E Qing Qu
%E Atlas Wang
%E Zhihui Zhu
%F pmlr-v234-lei24a
%I PMLR
%P 341--378
%U https://proceedings.mlr.press/v234/lei24a.html
%V 234
%X Despite impressive performance, deep neural networks require significant memory and computation costs, prohibiting their application in resource-constrained scenarios. Sparse training is one of the most common techniques to reduce these costs, however, the sparsity constraints add difficulty to the optimization, resulting in an increase in training time and instability. In this work, we aim to overcome this problem and achieve space-time co-efficiency. To accelerate and stabilize the convergence of sparse training, we analyze the gradient changes and develop an adaptive gradient correction method. Specifically, we approximate the correlation between the current and previous gradients, which is used to balance the two gradients to obtain a corrected gradient. Our method can be used with the most popular sparse training pipelines under both standard and adversarial setups. Theoretically, we prove that our method can accelerate the convergence rate of sparse training. Extensive experiments on multiple datasets, model architectures, and sparsities demonstrate that our method outperforms leading sparse training methods by up to 5.0% in accuracy given the same number of training epochs, and reduces the number of training epochs by up to 52.1% to achieve the same accuracy. Our code is available on: https://github.com/StevenBoys/AGENT.
APA
Lei, B., Xu, D., Zhang, R., He, S., & Mallick, B. (2024). Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 234:341-378. Available from https://proceedings.mlr.press/v234/lei24a.html.
