Compressed Rule Ensemble Learning

Malte Nalenz, Thomas Augustin
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:9998-10014, 2022.

Abstract

Ensembles of decision rules extracted from tree ensembles, like RuleFit, promise a good trade-off between predictive performance and model simplicity. However, they face competing interests: while a sufficiently large number of binary, non-smooth rules is necessary to fit smooth, well-generalizing decision boundaries, too many rules in the ensemble severely jeopardize interpretability. As a way out of this dilemma, we propose to add an extra step to the rule extraction process and compress clusters of similar rules into ensemble rules. The outputs of the individual rules in each cluster are pooled to produce a single soft output, reflecting the original ensemble's marginal smoothing behaviour. The final model, which we call Compressed Rule Ensemble (CRE), fits a linear combination of these ensemble rules. We empirically show that CRE is both sparse and accurate on various datasets, carrying over the ensemble behaviour while remaining interpretable.
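To make the pipeline described above concrete, the following is a minimal sketch of one plausible reading of it, not the authors' implementation: it assumes rules arrive as a 0/1 activation matrix from a fitted tree ensemble, groups similar rules by hierarchical clustering on the Jaccard distance between their activation vectors (the paper's actual similarity measure and clustering scheme may differ), averages each cluster into one soft ensemble-rule feature, and fits an L1-penalized linear model on the pooled features. The function name, clustering threshold, and linkage choice are all illustrative.

    # Illustrative sketch of the CRE idea, under the assumptions stated above.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist
    from sklearn.linear_model import LogisticRegressionCV

    def compress_rules(R, threshold=0.3):
        """R: (n_samples, n_rules) 0/1 matrix of rule activations.
        Returns (n_samples, n_clusters) matrix of pooled soft outputs
        and the cluster label of each rule."""
        # Jaccard distance between rules, then average-linkage clustering.
        d = pdist(R.T.astype(bool), metric="jaccard")
        Z = linkage(d, method="average")
        labels = fcluster(Z, t=threshold, criterion="distance")
        # Pool each cluster of binary rules into one soft feature in [0, 1].
        pooled = np.column_stack(
            [R[:, labels == k].mean(axis=1) for k in np.unique(labels)]
        )
        return pooled, labels

    # Usage (R_train from rule extraction, e.g. tree paths turned into
    # indicator functions; y_train the class labels):
    # X, labels = compress_rules(R_train)
    # clf = LogisticRegressionCV(penalty="l1", solver="saga").fit(X, y_train)

The sparse linear fit at the end mirrors the RuleFit-style final stage the abstract refers to; the compression step is what distinguishes CRE, since each selected coefficient now weights a soft cluster output rather than a single hard rule.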

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-nalenz22a,
  title     = {Compressed Rule Ensemble Learning},
  author    = {Nalenz, Malte and Augustin, Thomas},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {9998--10014},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/nalenz22a/nalenz22a.pdf},
  url       = {https://proceedings.mlr.press/v151/nalenz22a.html}
}
Endnote
%0 Conference Paper
%T Compressed Rule Ensemble Learning
%A Malte Nalenz
%A Thomas Augustin
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-nalenz22a
%I PMLR
%P 9998--10014
%U https://proceedings.mlr.press/v151/nalenz22a.html
%V 151
APA
Nalenz, M. & Augustin, T. (2022). Compressed Rule Ensemble Learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:9998-10014. Available from https://proceedings.mlr.press/v151/nalenz22a.html.
