Hadamard Domain Training with Integers for Class Incremental Quantized Learning

Martin Schiemer, Clemens JS Schaefer, Mark James Horeni, Yu Emma Wang, Juan Ye, Siddharth Joshi
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:198-220, 2025.

Abstract

Continual learning (CL) implemented directly on-device is crucial for the practical deployment of applications to battery-powered devices, where privacy needs must be balanced with personalization and agility in adapting to new data. Existing CL techniques can be cost-prohibitive on such devices, requiring quantized operations to enable practical deployment. However, as we show, commonly used fully quantized training (FQT) solutions do not converge when applied to CL on low-precision hardware. We propose Hadamard Domain Quantized Training (HDQT), which uses the Hadamard transform to facilitate FQT with 4-bit integer operands. HDQT enables low-precision, on-device training where other FQT solutions fail. An examination of gradient alignment reveals that, for early feature-detection layers, HDQT gradients are better aligned with the unquantized baselines than those generated by other FQT methods. This improved alignment translates to consistently better performance over the course of learning, reflected in the training trajectories through the model loss landscape. Numerical experiments conducted on Human Activity Recognition (HAR) datasets reveal a ≪ 1% average accuracy reduction across various competitive CL methods, even under aggressive 4-bit quantization with 8-bit accumulators. Additionally, on CIFAR100, no loss of accuracy is observed when the accumulator precision is relaxed to 12 bits for competitive non-dynamic-architecture CL methods.
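
The abstract describes HDQT's core mechanism only at a high level: rotate the operands of each matrix multiply into the Hadamard domain, where large values are spread evenly across entries, and only then quantize to 4-bit integers. As a rough illustration of that general idea (not the paper's exact algorithm), the following NumPy sketch approximates a matrix product with int4 operands in the Hadamard domain; the per-tensor symmetric scaling, the float rescaling step, and the absence of any low-precision accumulator modeling are simplifying assumptions made for clarity.

import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def quantize_int4(x: np.ndarray):
    """Symmetric per-tensor quantization to the signed 4-bit range [-8, 7]."""
    scale = np.abs(x).max() / 7.0 + 1e-12
    q = np.clip(np.round(x / scale), -8, 7)   # integer codes (stored as int4 in hardware)
    return q, scale

def hadamard_quantized_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Approximate A @ B with 4-bit operands in the Hadamard domain.

    Because H @ H.T = k * I, we have A @ B = (A @ H) @ (H.T @ B) / k, so the
    quantizer sees the Hadamard-transformed (outlier-flattened) operands
    instead of the raw tensors.
    """
    k = A.shape[1]
    H = hadamard(k)
    qA, sA = quantize_int4(A @ H)
    qB, sB = quantize_int4(H.T @ B)
    # On-device, the integer product would accumulate in low precision; rescale after.
    return (qA @ qB) * (sA * sB / k)

# Quick check against the float32 product.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((32, 64)), rng.standard_normal((64, 16))
err = np.linalg.norm(hadamard_quantized_matmul(A, B) - A @ B) / np.linalg.norm(A @ B)
print(f"relative error: {err:.3f}")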

Cite this Paper


BibTeX
@InProceedings{pmlr-v274-schiemer25a,
  title     = {Hadamard Domain Training with Integers for Class Incremental Quantized Learning},
  author    = {Schiemer, Martin and Schaefer, Clemens JS and Horeni, Mark James and Wang, Yu Emma and Ye, Juan and Joshi, Siddharth},
  booktitle = {Proceedings of The 3rd Conference on Lifelong Learning Agents},
  pages     = {198--220},
  year      = {2025},
  editor    = {Lomonaco, Vincenzo and Melacci, Stefano and Tuytelaars, Tinne and Chandar, Sarath and Pascanu, Razvan},
  volume    = {274},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Jul--01 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v274/main/assets/schiemer25a/schiemer25a.pdf},
  url       = {https://proceedings.mlr.press/v274/schiemer25a.html}
}
Endnote
%0 Conference Paper
%T Hadamard Domain Training with Integers for Class Incremental Quantized Learning
%A Martin Schiemer
%A Clemens JS Schaefer
%A Mark James Horeni
%A Yu Emma Wang
%A Juan Ye
%A Siddharth Joshi
%B Proceedings of The 3rd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2025
%E Vincenzo Lomonaco
%E Stefano Melacci
%E Tinne Tuytelaars
%E Sarath Chandar
%E Razvan Pascanu
%F pmlr-v274-schiemer25a
%I PMLR
%P 198--220
%U https://proceedings.mlr.press/v274/schiemer25a.html
%V 274
APA
Schiemer, M., Schaefer, C.J., Horeni, M.J., Wang, Y.E., Ye, J. & Joshi, S. (2025). Hadamard Domain Training with Integers for Class Incremental Quantized Learning. Proceedings of The 3rd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 274:198-220. Available from https://proceedings.mlr.press/v274/schiemer25a.html.