Adjusting for Multiple Testing in Decision Tree Pruning

David Jensen
Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics, PMLR R1:295-302, 1997.

Abstract

Overfitting is a widely observed pathology of induction algorithms. Overfitted models contain unnecessary structure that reflects nothing more than random variation in the data sample used to construct the model. Portions of these models are literally wrong, and can mislead users. Overfitted models are less efficient to store and use than their correctly sized counterparts. Finally, overfitting can reduce the accuracy of induced models on new data [14, 7]. For induction algorithms that build decision trees [1, 13, 15], pruning is a common approach to correcting overfitting. Pruning techniques take an induced tree, examine individual subtrees, and remove those subtrees deemed unnecessary. While pruning techniques can differ in several respects, they primarily differ in the criterion used to judge subtrees. Many criteria have been proposed, including statistical significance tests [13], corrected error estimates [15], and minimum description length calculations [12]. Most common pruning techniques, however, do not account for one potentially important factor: multiple testing. Multiple testing occurs whenever an induction algorithm examines several candidate models and selects the one that best accords with the data. Any search process necessarily involves multiple testing, and most common induction algorithms involve implicit or explicit search through a space of candidate models. In the case of decision trees, search involves examining many possible subtrees and selecting the best one. Pruning techniques need to account for the number of subtrees examined, because such multiple testing affects the apparent accuracy of models on training data [8]. This paper examines the importance of adjusting for multiple testing. Specifically, it examines the effectiveness of one particular pruning method, Bonferroni pruning. Bonferroni pruning adjusts the results of a standard significance test to account for the number of subtrees examined at a particular node of a decision tree. Evidence that Bonferroni pruning leads to better models supports the hypothesis that multiple testing is an important cause of overfitting.
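The abstract refers to an adjustment without reproducing it. As a rough sketch of the idea, and assuming the standard Bonferroni correction (the symbols n, p, and α below are illustrative and are not notation taken from the paper), the raw p-value from the significance test is scaled by the number of candidate subtrees examined at the node:

% Illustrative sketch only: the standard Bonferroni correction for n comparisons,
% not necessarily the paper's exact procedure.
\[
  p_{\mathrm{adj}} \;=\; \min\bigl(1,\; n \cdot p\bigr),
  \qquad \text{retain the subtree only if } p_{\mathrm{adj}} < \alpha .
\]

Read this way, a subtree that appears significant only because many candidates were searched fails the adjusted test and is pruned, which is the effect Bonferroni pruning is intended to achieve.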

Cite this Paper


BibTeX
@InProceedings{pmlr-vR1-jensen97a,
  title     = {Adjusting for Multiple Testing in Decision Tree Pruning},
  author    = {Jensen, David},
  booktitle = {Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics},
  pages     = {295--302},
  year      = {1997},
  editor    = {Madigan, David and Smyth, Padhraic},
  volume    = {R1},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--07 Jan},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/r1/jensen97a/jensen97a.pdf},
  url       = {https://proceedings.mlr.press/r1/jensen97a.html},
  note      = {Reissued by PMLR on 30 March 2021.}
}
Endnote
%0 Conference Paper
%T Adjusting for Multiple Testing in Decision Tree Pruning
%A David Jensen
%B Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 1997
%E David Madigan
%E Padhraic Smyth
%F pmlr-vR1-jensen97a
%I PMLR
%P 295--302
%U https://proceedings.mlr.press/r1/jensen97a.html
%V R1
%Z Reissued by PMLR on 30 March 2021.
APA
Jensen, D. (1997). Adjusting for Multiple Testing in Decision Tree Pruning. Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research R1:295-302. Available from https://proceedings.mlr.press/r1/jensen97a.html. Reissued by PMLR on 30 March 2021.
