ASkewSGD : An Annealed interval-constrained Optimisation method to train Quantized Neural Networks

Louis Leconte, Sholom Schechtman, Eric Moulines
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:3644-3663, 2023.

Abstract

In this paper, we develop a new algorithm, Annealed Skewed SGD (AskewSGD), for training deep neural networks (DNNs) with quantized weights. First, we formulate the training of quantized neural networks (QNNs) as a smoothed sequence of interval-constrained optimization problems. Then, we propose a new first-order stochastic method, AskewSGD, to solve each constrained optimization subproblem. Unlike active-set and feasible-direction algorithms, AskewSGD avoids projections and optimization over the entire feasible set, and allows iterates that are infeasible. The numerical complexity of AskewSGD is comparable to that of existing approaches for training QNNs, such as the straight-through gradient estimator used in BinaryConnect, or other state-of-the-art methods (ProxQuant, LUQ). We establish convergence guarantees for AskewSGD under general assumptions on the objective function. Experimental results show that AskewSGD performs better than or on par with state-of-the-art methods on classical benchmarks.
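
The abstract describes the approach only at a high level. As a purely illustrative aid, the short NumPy sketch below shows how an annealed interval constraint of the form 1 - w_i^2 <= eps, with eps shrinking to 0 over rounds, drives real-valued weights toward the binary set {-1, +1} while ordinary first-order steps are taken on a loss. The quadratic toy loss, the constraint shape, and the crude "restore feasibility by stepping toward sign(w_i)" rule are assumptions made for this demonstration only; they are not the skewed gradient update proposed in the paper.

# Toy illustration (NOT the authors' update rule): anneal an interval
# constraint 1 - w_i**2 <= eps while taking first-order steps on a loss.
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, target):
    # Gradient of a simple quadratic surrogate loss 0.5 * ||w - target||^2
    # (assumed here purely for the demo).
    return w - target

w = rng.normal(size=8)        # real-valued weights, possibly infeasible at start
target = rng.normal(size=8)   # stand-in for a "full-precision" solution
lr = 0.1

for k, eps in enumerate([0.5, 0.2, 0.05, 0.0]):   # annealed tolerance eps -> 0
    for _ in range(200):
        g = loss_grad(w, target)
        # Constraint is violated on coordinates where 1 - w_i**2 > eps,
        # i.e. where |w_i| is still too far from 1.
        violated = (1.0 - w**2) > eps
        # On violated coordinates, step toward sign(w_i) to restore feasibility;
        # elsewhere, take a plain SGD step.  This masking heuristic is an
        # assumption for the demo, not AskewSGD's skewed gradient.
        step = np.where(violated, np.sign(w), -g)
        w = w + lr * step
    print(f"round {k}: eps={eps:.2f}, max(1 - w_i^2) = {np.max(1.0 - w**2):.3f}")

print("real-valued weights:", np.round(w, 2))
print("quantized weights sign(w):", np.sign(w))

Running the snippet prints the worst-case constraint value after each annealing round; as eps shrinks, the weights settle near +/-1 and the final sign(w) gives binary weights, mirroring the role that a final rounding step typically plays in QNN training.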

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-leconte23a, title = {ASkewSGD : An Annealed interval-constrained Optimisation method to train Quantized Neural Networks}, author = {Leconte, Louis and Schechtman, Sholom and Moulines, Eric}, booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics}, pages = {3644--3663}, year = {2023}, editor = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem}, volume = {206}, series = {Proceedings of Machine Learning Research}, month = {25--27 Apr}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v206/leconte23a/leconte23a.pdf}, url = {https://proceedings.mlr.press/v206/leconte23a.html}, abstract = {In this paper, we develop a new algorithm, Annealed Skewed SGD - AskewSGD - for training deep neural networks (DNNs) with quantized weights. First, we formulate the training of quantized neural networks (QNNs) as a smoothed sequence of interval-constrained optimization problems. Then, we propose a new first-order stochastic method, AskewSGD, to solve each constrained optimization subproblem. Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set and allows iterates that are infeasible. The numerical complexity of AskewSGD is comparable to existing approaches for training QNNs, such as the straight-through gradient estimator used in BinaryConnect, or other state of the art methods (ProxQuant, LUQ). We establish convergence guarantees for AskewSGD (under general assumptions for the objective function). Experimental results show that the AskewSGD algorithm performs better than or on par with state of the art methods in classical benchmarks.} }
Endnote
%0 Conference Paper %T ASkewSGD : An Annealed interval-constrained Optimisation method to train Quantized Neural Networks %A Louis Leconte %A Sholom Schechtman %A Eric Moulines %B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2023 %E Francisco Ruiz %E Jennifer Dy %E Jan-Willem van de Meent %F pmlr-v206-leconte23a %I PMLR %P 3644--3663 %U https://proceedings.mlr.press/v206/leconte23a.html %V 206 %X In this paper, we develop a new algorithm, Annealed Skewed SGD - AskewSGD - for training deep neural networks (DNNs) with quantized weights. First, we formulate the training of quantized neural networks (QNNs) as a smoothed sequence of interval-constrained optimization problems. Then, we propose a new first-order stochastic method, AskewSGD, to solve each constrained optimization subproblem. Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set and allows iterates that are infeasible. The numerical complexity of AskewSGD is comparable to existing approaches for training QNNs, such as the straight-through gradient estimator used in BinaryConnect, or other state of the art methods (ProxQuant, LUQ). We establish convergence guarantees for AskewSGD (under general assumptions for the objective function). Experimental results show that the AskewSGD algorithm performs better than or on par with state of the art methods in classical benchmarks.
APA
Leconte, L., Schechtman, S. & Moulines, E.. (2023). ASkewSGD : An Annealed interval-constrained Optimisation method to train Quantized Neural Networks. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:3644-3663 Available from https://proceedings.mlr.press/v206/leconte23a.html.