Lossless or Quantized Boosting with Integer Arithmetic
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4829-4838, 2019.
Abstract
In supervised learning, efficiency often starts with the choice of a good loss: support vector machines popularised the hinge loss, AdaBoost popularised the exponential loss, etc. Recent trends in machine learning have highlighted the necessity for training routines to meet tight requirements on communication, bandwidth, energy, operations, encoding, among others. Fitting the often decades-old state-of-the-art training routines into these new constraints does not go without pain and uncertainty or a reduction in the original guarantees. Our paper starts with the design of a new strictly proper canonical, twice differentiable loss called the Q-loss. Importantly, its mirror update over (arbitrary) rational inputs uses only integer arithmetic – more precisely, the sole use of $+, -, /, \times, |\cdot|$. We build a learning algorithm which is able, under mild assumptions, to achieve a lossless boosting-compliant training. We give conditions for a quantization of its main memory footprint, the weights, to be done while keeping the whole algorithm boosting-compliant. Experiments display that the algorithm can achieve fast convergence during the early boosting rounds compared to AdaBoost, even with a weight storage that can be 30+ times smaller. Lastly, we show that the Bayes risk of the Q-loss can be used as a node splitting criterion for decision trees and guarantees optimal boosting convergence.
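The abstract's central point is that weight updates over rational inputs can be carried out with integer operations only, avoiding floating-point arithmetic entirely. The sketch below is purely illustrative and is not the paper's Q-loss mirror update; it only shows the underlying idea of storing a boosting weight as an exact rational (an integer numerator/denominator pair, both names chosen here for illustration) so that an update touches nothing but integer $+, -, /, \times$ and absolute value:

```python
from math import gcd

# Illustrative sketch only -- NOT the paper's Q-loss update rule.
# A weight is kept as an exact rational (num, den) of Python integers,
# so every operation below is exact integer arithmetic: no rounding,
# no floats, hence "lossless" in the sense the abstract alludes to.

def reduce(num, den):
    """Put num/den in lowest terms with a positive denominator."""
    g = gcd(abs(num), abs(den))  # gcd itself uses only integer ops
    num, den = num // g, den // g
    if den < 0:  # normalise the sign into the numerator
        num, den = -num, -den
    return num, den

def update(w, factor):
    """Multiply two rationals exactly: (a/b) * (p/q) = (a*p)/(b*q)."""
    (a, b), (p, q) = w, factor
    return reduce(a * p, b * q)

# Example: halve a weight of 3/4 exactly.
w = (3, 4)
w = update(w, (1, 2))  # w == (3, 8), with zero representation error
```

A floating-point version of the same sequence of updates would accumulate rounding error; keeping integer pairs trades a little bookkeeping (the gcd reduction) for exactness, which is the property the paper's lossless training relies on.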