Dropout as a Regularizer of Interaction Effects

Benjamin J. Lengerich, Eric Xing, Rich Caruana
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:7550-7564, 2022.

Abstract

We examine Dropout through the perspective of interaction effects. This view reveals a symmetry that explains Dropout: given N variables, there are N choose k possible sets of k variables that could form an interaction, i.e., O(N^k) candidates; conversely, the probability that an interaction of k variables survives Dropout at rate p is (1-p)^k, which decays in k. These rates effectively cancel: the combinatorial growth in candidate interactions is counterbalanced by the exponential decay in survival probability, so Dropout selectively regularizes against higher-order interactions. We establish this perspective analytically and verify it empirically. Viewing Dropout as a regularizer against interaction effects has several practical implications: (1) higher Dropout rates should be used when we need stronger regularization against spurious high-order interactions; (2) caution should be exercised when interpreting Dropout-based explanations and uncertainty measures; and (3) networks trained with Input Dropout are biased estimators. We also compare Dropout to other regularizers and find that it is difficult to obtain the same selective pressure against high-order interactions with these methods.
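To make the counting argument concrete, the sketch below (my own illustration, not code from the paper; the function names and Monte Carlo setup are assumptions) compares the closed-form expectation C(N, k) * (1-p)^k against simulated Dropout masks. Each of the C(N, k) candidate k-way interactions survives a mask only when all k of its inputs are kept, which happens with probability (1-p)^k.

import math
import random

def expected_survivors(N, k, p):
    """Closed form: C(N, k) candidate k-way interactions, each surviving
    an input-Dropout mask with probability (1 - p)**k."""
    return math.comb(N, k) * (1 - p) ** k

def simulated_survivors(N, k, p, trials=2000, seed=0):
    """Monte Carlo estimate: draw random Dropout masks and count the
    k-subsets whose members are all still active."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        kept = sum(rng.random() > p for _ in range(N))  # inputs that survive this mask
        total += math.comb(kept, k)  # k-subsets drawn entirely from the survivors
    return total / trials

N, p = 20, 0.5
for k in range(1, 6):
    print(f"k={k}: expected={expected_survivors(N, k, p):10.2f}  "
          f"simulated={simulated_survivors(N, k, p):10.2f}")

With N = 20 and p = 0.5 the two columns agree (about 10 for k = 1, 47.5 for k = 2, and so on), reflecting the identity E[C(kept, k)] = C(N, k)(1-p)^k: relative to the C(N, k) candidates present without Dropout, each additional order of interaction is suppressed by a further factor of (1-p).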

Cite this Paper

BibTeX
@InProceedings{pmlr-v151-lengerich22a,
  title     = {Dropout as a Regularizer of Interaction Effects},
  author    = {Lengerich, Benjamin J. and Xing, Eric and Caruana, Rich},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {7550--7564},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/lengerich22a/lengerich22a.pdf},
  url       = {https://proceedings.mlr.press/v151/lengerich22a.html}
}
Endnote
%0 Conference Paper
%T Dropout as a Regularizer of Interaction Effects
%A Benjamin J. Lengerich
%A Eric Xing
%A Rich Caruana
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-lengerich22a
%I PMLR
%P 7550--7564
%U https://proceedings.mlr.press/v151/lengerich22a.html
%V 151
APA
Lengerich, B. J., Xing, E., & Caruana, R. (2022). Dropout as a Regularizer of Interaction Effects. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:7550-7564. Available from https://proceedings.mlr.press/v151/lengerich22a.html.
