Surprising properties of dropout in deep networks

David P. Helmbold, Philip M. Long
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1123-1146, 2017.

Abstract

We analyze dropout in deep networks with rectified linear units and the quadratic loss. Our results expose surprising differences between the behavior of dropout and more traditional regularizers like weight decay. For example, on some simple data sets dropout training produces negative weights even though the output is the sum of the inputs. This provides a counterpoint to the suggestion that dropout discourages co-adaptation of weights. We also show that the dropout penalty can grow exponentially in the depth of the network while the weight-decay penalty remains essentially linear, and that dropout is insensitive to various re-scalings of the input features, outputs, and network weights. This last insensitivity implies that there are no isolated local minima of the dropout training criterion. Our work uncovers new properties of dropout, extends our understanding of why dropout succeeds, and lays the foundation for further progress.
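
To make the setting concrete, here is a minimal illustrative sketch (not code from the paper) of the kind of dropout training criterion analyzed in this work: the expected quadratic loss of a small ReLU network when each input coordinate is kept independently with probability q and rescaled by 1/q, one common formulation of dropout. The toy "sum of the inputs" targets echo the simple data sets mentioned in the abstract; the network sizes, dropout rate q, and Monte-Carlo estimation are assumptions made only for illustration.

import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def dropout_criterion(W1, w2, X, y, q=0.5, n_samples=2000):
    """Monte-Carlo estimate of E_r[(y - w2 . relu(W1 (r * x)))^2], averaged
    over the data, where each coordinate of r is independently 1/q with
    probability q and 0 otherwise (so E[r * x] = x)."""
    losses = []
    for x, target in zip(X, y):
        # Random dropout masks on the inputs, with the usual 1/q rescaling.
        masks = (rng.random((n_samples, x.size)) < q) / q
        hidden = relu((masks * x) @ W1.T)   # hidden ReLU activations per mask
        preds = hidden @ w2                 # network outputs per mask
        losses.append(np.mean((target - preds) ** 2))
    return float(np.mean(losses))

# Toy data where the target is the sum of the inputs, as in the abstract's example.
X = rng.random((20, 4))
y = X.sum(axis=1)

# A small one-hidden-layer ReLU network with arbitrary initial weights.
W1 = rng.standard_normal((3, 4)) * 0.5
w2 = rng.standard_normal(3) * 0.5

print("dropout training criterion:", dropout_criterion(W1, w2, X, y))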

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-helmbold17a,
  title     = {Surprising properties of dropout in deep networks},
  author    = {Helmbold, David P. and Long, Philip M.},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {1123--1146},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/helmbold17a/helmbold17a.pdf},
  url       = {https://proceedings.mlr.press/v65/helmbold17a.html},
  abstract  = {We analyze dropout in deep networks with rectified linear units and the quadratic loss. Our results expose surprising differences between the behavior of dropout and more traditional regularizers like weight decay. For example, on some simple data sets dropout training produces negative weights even though the output is the sum of the inputs. This provides a counterpoint to the suggestion that dropout discourages co-adaptation of weights. We also show that the dropout penalty can grow exponentially in the depth of the network while the weight-decay penalty remains essentially linear, and that dropout is insensitive to various re-scalings of the input features, outputs, and network weights. This last insensitivity implies that there are no isolated local minima of the dropout training criterion. Our work uncovers new properties of dropout, extends our understanding of why dropout succeeds, and lays the foundation for further progress.}
}
Endnote
%0 Conference Paper
%T Surprising properties of dropout in deep networks
%A David P. Helmbold
%A Philip M. Long
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-helmbold17a
%I PMLR
%P 1123--1146
%U https://proceedings.mlr.press/v65/helmbold17a.html
%V 65
%X We analyze dropout in deep networks with rectified linear units and the quadratic loss. Our results expose surprising differences between the behavior of dropout and more traditional regularizers like weight decay. For example, on some simple data sets dropout training produces negative weights even though the output is the sum of the inputs. This provides a counterpoint to the suggestion that dropout discourages co-adaptation of weights. We also show that the dropout penalty can grow exponentially in the depth of the network while the weight-decay penalty remains essentially linear, and that dropout is insensitive to various re-scalings of the input features, outputs, and network weights. This last insensitivity implies that there are no isolated local minima of the dropout training criterion. Our work uncovers new properties of dropout, extends our understanding of why dropout succeeds, and lays the foundation for further progress.
APA
Helmbold, D. P. & Long, P. M. (2017). Surprising properties of dropout in deep networks. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:1123-1146. Available from https://proceedings.mlr.press/v65/helmbold17a.html.
