Quickly Boosting Decision Trees – Pruning Underachieving Features Early

Ron Appel, Thomas Fuchs, Piotr Dollar, Pietro Perona
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):594-602, 2013.

Abstract

Boosted decision trees are one of the most popular and successful learning techniques used today. While they are fast at test time, their relatively slow training makes them impractical for applications with real-time learning requirements. We propose a principled approach to overcome this drawback. We prove a bound on the error of a decision stump given its preliminary error on a subset of the training data; the bound may be used to prune unpromising features early in the training process. We propose a fast training algorithm that exploits this bound, yielding speedups of an order of magnitude at no cost in the final performance of the classifier. Our method is not a new variant of Boosting; rather, it may be used in conjunction with existing Boosting algorithms and other sampling heuristics to achieve even greater speedups.
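To make the mechanism concrete, below is a minimal NumPy sketch of the pruning idea described in the abstract, not the authors' released implementation: weights are normalized over the full training set, so a stump's misclassified mass on a weight-heavy subset lower-bounds its best achievable full-data error, and features whose preliminary error already exceeds the best full error found so far are pruned. The function names, the m_frac parameter, and the simplified threshold scan (which ignores ties in feature values) are illustrative assumptions.

import numpy as np

def best_stump_error(x, y, w):
    # Minimum misclassified weight of a threshold stump on one feature,
    # over all split points and both polarities. Weights are NOT
    # re-normalized here, so errors computed on a subset are directly
    # comparable to full-data errors. Ties in x are ignored for brevity.
    order = np.argsort(x, kind="stable")
    ys, ws = y[order], w[order]
    pos = np.where(ys > 0, ws, 0.0)              # weight mass of +1 labels
    neg = np.where(ys < 0, ws, 0.0)              # weight mass of -1 labels
    pos_below = np.concatenate(([0.0], np.cumsum(pos)))
    neg_below = np.concatenate(([0.0], np.cumsum(neg)))
    # Polarity A: predict +1 above the split point, -1 at or below it.
    err_a = pos_below + (neg.sum() - neg_below)
    # Polarity B (predict -1 above the split) is the complement of A.
    err_b = ws.sum() - err_a
    return float(np.minimum(err_a, err_b).min())

def quick_stump_search(X, y, w, m_frac=0.1):
    # X: (n, d) features; y: labels in {-1, +1}; w: nonnegative weights.
    # m_frac (a hyperparameter assumed here for the sketch) is the
    # fraction of total weight mass used for preliminary errors.
    w = w / w.sum()                              # total mass = 1
    heavy = np.argsort(-w)                       # heaviest examples first
    m = int(np.searchsorted(np.cumsum(w[heavy]), m_frac)) + 1
    subset = heavy[:m]

    d = X.shape[1]
    # Preliminary error on the subset lower-bounds each feature's best
    # achievable full-data error (the bound the abstract refers to).
    prelim = np.array([best_stump_error(X[subset, k], y[subset], w[subset])
                       for k in range(d)])

    best_err, best_k = np.inf, None
    for k in np.argsort(prelim):                 # most promising first
        if prelim[k] >= best_err:
            break                                # all remaining features pruned
        err = best_stump_error(X[:, k], y, w)
        if err < best_err:
            best_err, best_k = err, k
    return best_k, best_err

Because features are visited in order of increasing preliminary error, the first feature whose bound fails also rules out every feature after it, so a single break prunes the remainder of the feature set at once.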

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-appel13,
  title     = {Quickly Boosting Decision Trees -- Pruning Underachieving Features Early},
  author    = {Appel, Ron and Fuchs, Thomas and Dollar, Piotr and Perona, Pietro},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {594--602},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/appel13.pdf},
  url       = {https://proceedings.mlr.press/v28/appel13.html}
}
APA
Appel, R., Fuchs, T., Dollar, P., & Perona, P. (2013). Quickly Boosting Decision Trees – Pruning Underachieving Features Early. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):594-602. Available from https://proceedings.mlr.press/v28/appel13.html.

Related Material

Download PDF: http://proceedings.mlr.press/v28/appel13.pdf