Stochastic Gradient Trees

Henry Gouk, Bernhard Pfahringer, Eibe Frank
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:1094-1109, 2019.

Abstract

We present an algorithm for learning decision trees using stochastic gradient information as the source of supervision. In contrast to previous approaches to gradient-based tree learning, our method operates in the incremental learning setting rather than the batch learning setting, and does not make use of soft splits or require the construction of a new tree for every update. We demonstrate how one can apply these decision trees to different problems by changing only the loss function, using classification, regression, and multi-instance learning as example applications. In the experimental evaluation, our method performs similarly to standard incremental classification trees, outperforms state-of-the-art incremental regression trees, and achieves comparable performance with batch multi-instance learning methods.
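To make the idea concrete, below is a minimal sketch of a tree driven purely by per-example gradient information, so that the task is changed by swapping the loss function. It is not the authors' reference implementation: the names (IncrementalGradientTree, Leaf, SquaredError, Logistic), the fixed equal-width binning, the XGBoost-style gain with a simple threshold in place of the paper's statistical split test, and the Newton-style leaf updates are all illustrative assumptions layered on what the abstract states.

import math

class SquaredError:
    """Gradient and Hessian of 0.5 * (p - y)^2 with respect to the prediction p."""
    def grad_hess(self, y, p):
        return p - y, 1.0

class Logistic:
    """Binary cross-entropy on a raw score p, for labels y in {0, 1}."""
    def grad_hess(self, y, p):
        s = 1.0 / (1.0 + math.exp(-p))
        return s - y, max(s * (1.0 - s), 1e-6)

class Leaf:
    def __init__(self, n_features, n_bins):
        self.value = 0.0  # current prediction made by this leaf
        # Per-(feature, bin) sums of gradients and Hessians for split scoring.
        self.g = [[0.0] * n_bins for _ in range(n_features)]
        self.h = [[0.0] * n_bins for _ in range(n_features)]
        self.n = 0
        self.split = None  # (feature, threshold_bin, left_child, right_child)

class IncrementalGradientTree:
    """Grows a tree one example at a time from gradient statistics alone."""

    def __init__(self, loss, n_features, n_bins=8, lam=1.0,
                 grace_period=200, min_gain=0.05):
        self.loss = loss
        self.n_features, self.n_bins, self.lam = n_features, n_bins, lam
        self.grace_period, self.min_gain = grace_period, min_gain
        self.root = Leaf(n_features, n_bins)

    def _bin(self, v):
        # Crude equal-width binning; features are assumed scaled to [0, 1].
        return min(int(v * self.n_bins), self.n_bins - 1)

    def _leaf(self, x):
        node = self.root
        while node.split is not None:
            f, t, left, right = node.split
            node = left if self._bin(x[f]) <= t else right
        return node

    def predict(self, x):
        return self._leaf(x).value

    def update(self, x, y):
        # The only supervision used is the (gradient, Hessian) pair of the
        # loss at the leaf's current prediction.
        leaf = self._leaf(x)
        g, h = self.loss.grad_hess(y, leaf.value)
        for f in range(self.n_features):
            b = self._bin(x[f])
            leaf.g[f][b] += g
            leaf.h[f][b] += h
        leaf.n += 1
        if leaf.n % self.grace_period == 0:
            self._split_or_update(leaf)

    def _split_or_update(self, leaf):
        G = sum(leaf.g[0])  # total gradient sum (same for every feature row)
        H = sum(leaf.h[0])  # total Hessian sum
        base = G * G / (H + self.lam)
        best_gain, best = self.min_gain, None
        for f in range(self.n_features):
            gl = hl = 0.0
            for t in range(self.n_bins - 1):
                gl += leaf.g[f][t]
                hl += leaf.h[f][t]
                gain = (gl * gl / (hl + self.lam)
                        + (G - gl) ** 2 / ((H - hl) + self.lam) - base)
                if gain > best_gain:
                    best_gain, best = gain, (f, t, gl, hl)
        if best is not None:
            f, t, gl, hl = best
            left = Leaf(self.n_features, self.n_bins)
            right = Leaf(self.n_features, self.n_bins)
            # Newton-style leaf values derived from the accumulated statistics.
            left.value = leaf.value - gl / (hl + self.lam)
            right.value = leaf.value - (G - gl) / ((H - hl) + self.lam)
            leaf.split = (f, t, left, right)
        else:
            # No split worth making yet: take a Newton step on the leaf's own
            # prediction instead, then reset the statistics.
            leaf.value -= G / (H + self.lam)
            leaf.g = [[0.0] * self.n_bins for _ in range(self.n_features)]
            leaf.h = [[0.0] * self.n_bins for _ in range(self.n_features)]

Constructing the tree with Logistic() instead of SquaredError() turns the same code into an incremental classifier; only loss.grad_hess changes, which is the property the abstract emphasises. Note that the paper decides splits with a statistical test rather than the fixed gain threshold used in this sketch.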

Cite this Paper

BibTeX
@InProceedings{pmlr-v101-gouk19a,
  title     = {Stochastic Gradient Trees},
  author    = {Gouk, Henry and Pfahringer, Bernhard and Frank, Eibe},
  booktitle = {Proceedings of The Eleventh Asian Conference on Machine Learning},
  pages     = {1094--1109},
  year      = {2019},
  editor    = {Lee, Wee Sun and Suzuki, Taiji},
  volume    = {101},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v101/gouk19a/gouk19a.pdf},
  url       = {https://proceedings.mlr.press/v101/gouk19a.html},
  abstract  = {We present an algorithm for learning decision trees using stochastic gradient information as the source of supervision. In contrast to previous approaches to gradient-based tree learning, our method operates in the incremental learning setting rather than the batch learning setting, and does not make use of soft splits or require the construction of a new tree for every update. We demonstrate how one can apply these decision trees to different problems by changing only the loss function, using classification, regression, and multi-instance learning as example applications. In the experimental evaluation, our method performs similarly to standard incremental classification trees, outperforms state-of-the-art incremental regression trees, and achieves comparable performance with batch multi-instance learning methods.}
}
Endnote
%0 Conference Paper
%T Stochastic Gradient Trees
%A Henry Gouk
%A Bernhard Pfahringer
%A Eibe Frank
%B Proceedings of The Eleventh Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Wee Sun Lee
%E Taiji Suzuki
%F pmlr-v101-gouk19a
%I PMLR
%P 1094--1109
%U https://proceedings.mlr.press/v101/gouk19a.html
%V 101
%X We present an algorithm for learning decision trees using stochastic gradient information as the source of supervision. In contrast to previous approaches to gradient-based tree learning, our method operates in the incremental learning setting rather than the batch learning setting, and does not make use of soft splits or require the construction of a new tree for every update. We demonstrate how one can apply these decision trees to different problems by changing only the loss function, using classification, regression, and multi-instance learning as example applications. In the experimental evaluation, our method performs similarly to standard incremental classification trees, outperforms state-of-the-art incremental regression trees, and achieves comparable performance with batch multi-instance learning methods.
APA
Gouk, H., Pfahringer, B. & Frank, E. (2019). Stochastic Gradient Trees. Proceedings of The Eleventh Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 101:1094-1109. Available from https://proceedings.mlr.press/v101/gouk19a.html.
