Generalized additive models via direct optimization of regularized decision stump forests

Magzhan Gabidolla, Miguel Á. Carreira-Perpiñán
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:18047-18061, 2025.

Abstract

We explore ensembles of axis-aligned decision stumps, which can be viewed as a generalized additive model (GAM). In this model, stumps utilizing the same feature are grouped to form a shape function for that feature. Instead of relying on boosting or bagging, we employ alternating optimization to learn a fixed-size stump forest. We optimize the parameters of each stump exactly through enumeration, given the other stumps are fixed. For fixed stump splits, the leaf values are optimized jointly by solving a convex problem. To address the overfitting issue inherent in naive optimization of stump forests, we propose effective regularization techniques. Our regularized stump forests achieve accuracy comparable to state-of-the-art GAM methods while using fewer parameters. This work is the first to successfully learn stump forests without employing traditional ensembling techniques like bagging or boosting.
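The abstract's training loop (alternate between exact enumeration of each stump's split with the others fixed, and a joint convex solve for all leaf values given fixed splits) can be sketched for squared loss as follows. This is not the authors' code: the function name, the ridge-style L2 penalty on leaf values, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def fit_stump_forest(X, y, n_stumps=20, n_iters=10, lam=0.1, rng=None):
    """Illustrative sketch (not the paper's implementation): alternating
    optimization of a fixed-size decision-stump forest for squared loss,
    with an assumed L2 penalty `lam` on leaf values as the regularizer."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    feats = rng.integers(0, d, n_stumps)                    # feature index per stump
    thrs = np.array([rng.choice(X[:, f]) for f in feats])   # threshold per stump

    def design(feats, thrs):
        # Indicator matrix: column j+1 is 1 where stump j fires; column 0 is a bias.
        Z = (X[:, feats] > thrs).astype(float)
        return np.hstack([np.ones((n, 1)), Z])

    def solve_leaves(Z):
        # Joint convex (ridge) fit of all leaf values given fixed splits.
        A = Z.T @ Z + lam * np.eye(Z.shape[1])
        return np.linalg.solve(A, Z.T @ y)

    w = solve_leaves(design(feats, thrs))
    for _ in range(n_iters):
        for j in range(n_stumps):
            Z = design(feats, thrs)
            r = y - Z @ w + Z[:, j + 1] * w[j + 1]          # residual without stump j
            best = (np.inf, feats[j], thrs[j], w[j + 1])
            for f in range(d):                              # exact enumeration of splits
                for t in np.unique(X[:, f])[:-1]:           # skip the degenerate max split
                    m = (X[:, f] > t).astype(float)
                    v = (r * m).sum() / (m.sum() + lam)     # ridge-optimal leaf value
                    err = np.sum((r - v * m) ** 2) + lam * v * v
                    if err < best[0]:
                        best = (err, f, t, v)
            _, feats[j], thrs[j], w[j + 1] = best           # exact update for stump j
        w = solve_leaves(design(feats, thrs))               # joint leaf refit per sweep
    return feats, thrs, w
```

Grouping the fitted stumps by `feats` recovers one piecewise-constant shape function per feature, which is what makes the ensemble a GAM.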

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-gabidolla25a,
  title     = {Generalized additive models via direct optimization of regularized decision stump forests},
  author    = {Gabidolla, Magzhan and Carreira-Perpi\~{n}\'{a}n, Miguel \'{A}.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {18047--18061},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/gabidolla25a/gabidolla25a.pdf},
  url       = {https://proceedings.mlr.press/v267/gabidolla25a.html},
  abstract  = {We explore ensembles of axis-aligned decision stumps, which can be viewed as a generalized additive model (GAM). In this model, stumps utilizing the same feature are grouped to form a shape function for that feature. Instead of relying on boosting or bagging, we employ alternating optimization to learn a fixed-size stump forest. We optimize the parameters of each stump exactly through enumeration, given the other stumps are fixed. For fixed stump splits, the leaf values are optimized jointly by solving a convex problem. To address the overfitting issue inherent in naive optimization of stump forests, we propose effective regularization techniques. Our regularized stump forests achieve accuracy comparable to state-of-the-art GAM methods while using fewer parameters. This work is the first to successfully learn stump forests without employing traditional ensembling techniques like bagging or boosting.}
}
Endnote
%0 Conference Paper
%T Generalized additive models via direct optimization of regularized decision stump forests
%A Magzhan Gabidolla
%A Miguel Á. Carreira-Perpiñán
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-gabidolla25a
%I PMLR
%P 18047--18061
%U https://proceedings.mlr.press/v267/gabidolla25a.html
%V 267
%X We explore ensembles of axis-aligned decision stumps, which can be viewed as a generalized additive model (GAM). In this model, stumps utilizing the same feature are grouped to form a shape function for that feature. Instead of relying on boosting or bagging, we employ alternating optimization to learn a fixed-size stump forest. We optimize the parameters of each stump exactly through enumeration, given the other stumps are fixed. For fixed stump splits, the leaf values are optimized jointly by solving a convex problem. To address the overfitting issue inherent in naive optimization of stump forests, we propose effective regularization techniques. Our regularized stump forests achieve accuracy comparable to state-of-the-art GAM methods while using fewer parameters. This work is the first to successfully learn stump forests without employing traditional ensembling techniques like bagging or boosting.
APA
Gabidolla, M. & Carreira-Perpiñán, M. Á. (2025). Generalized additive models via direct optimization of regularized decision stump forests. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:18047-18061. Available from https://proceedings.mlr.press/v267/gabidolla25a.html.

Related Material