Hadamard Domain Training with Integers for Class Incremental Quantized Learning
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:198-220, 2025.
Abstract
Continual learning (CL) implemented directly on-device is crucial for the practical deployment of applications to battery-powered devices, where privacy needs must be balanced with personalization and agility in adapting to new data. Existing CL techniques can be cost-prohibitive on such devices, requiring quantized operations to enable practical deployment. However, as we show, commonly used fully quantized training (FQT) solutions do not converge when applied to CL on low-precision hardware. We propose Hadamard Domain Quantized Training (HDQT), which uses the Hadamard transform to facilitate FQT with 4-bit integer operands. HDQT enables low-precision, on-device training where other FQT solutions fail. An examination of gradient alignment reveals that, for early feature-detection layers, HDQT gradients are better aligned with the unquantized baselines than those generated by other FQT methods. This improved alignment translates to consistently better performance over the course of learning, reflected in the training trajectories through the model loss landscape. Numerical experiments conducted on Human Activity Recognition (HAR) datasets reveal a ≪ 1% average accuracy reduction across various competitive CL methods, even under aggressive 4-bit quantization with 8-bit accumulators. Additionally, on CIFAR100 no loss of accuracy is observed when the accumulator precision is relaxed to 12 bits for competitive non-dynamic-architecture CL methods.
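To illustrate the core idea of training in the Hadamard domain, the sketch below shows a minimal NumPy example of a Hadamard-domain integer matrix multiply. It is not the authors' implementation: the function names and the symmetric per-tensor quantizer are illustrative assumptions, and accumulation is done in int32 rather than the 8- or 12-bit accumulators studied in the paper. Because the orthonormal Hadamard matrix satisfies Hn @ Hn.T = I, rotating both operands leaves the full-precision product unchanged while spreading outliers, so 4-bit quantization of the rotated operands loses less information.

```python
import numpy as np
from scipy.linalg import hadamard  # Sylvester-construction Hadamard matrix


def quantize(x, bits=4):
    """Symmetric per-tensor quantization to signed integers (illustrative helper)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax + 1e-12
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale


def hadamard_domain_matmul(x, w, bits=4):
    """Sketch: rotate both operands into the Hadamard domain, quantize, multiply.

    In full precision, (x @ Hn) @ (Hn.T @ w) == x @ w exactly; quantizing the
    rotated operands spreads outliers before the low-bit integer product.
    """
    n = x.shape[-1]                   # must be a power of two for hadamard()
    Hn = hadamard(n) / np.sqrt(n)     # orthonormal Hadamard matrix
    xq, sx = quantize(x @ Hn, bits)   # 4-bit integer operand
    wq, sw = quantize(Hn.T @ w, bits) # 4-bit integer operand
    acc = xq @ wq                     # integer accumulation (int32 here)
    return acc * (sx * sw)            # rescale back to real values


# Usage: compare against the full-precision product
x = np.random.randn(8, 64)
w = np.random.randn(64, 32)
err = np.abs(hadamard_domain_matmul(x, w) - x @ w).mean()
print(f"mean abs error: {err:.4f}")
```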