Avoiding pathologies in very deep networks

David Duvenaud, Oren Rippel, Ryan Adams, Zoubin Ghahramani
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:202-210, 2014.

Abstract

Choosing appropriate architectures and regularization strategies of deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Lastly, we characterize the class of models obtained by performing dropout on Gaussian processes.
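The deep Gaussian process studied in the paper is a composition of GP-distributed functions, f_L(...f_2(f_1(x))...). As a rough illustration of that construction (not code from the paper; the squared-exponential kernel, unit hyperparameters, and jitter term below are assumptions), one can draw a function from such a composition by sampling each layer at the outputs of the previous one:

import numpy as np

def se_kernel(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between two sets of row-vector inputs.
    sqdist = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def sample_deep_gp(x, depth, seed=0):
    # Draw one sample from a depth-layer composition f_depth(...f_1(x)...),
    # where each layer is an independent GP prior with an SE kernel.
    rng = np.random.default_rng(seed)
    h = x
    for _ in range(depth):
        K = se_kernel(h, h) + 1e-6 * np.eye(len(h))  # jitter for numerical stability
        L = np.linalg.cholesky(K)
        h = L @ rng.standard_normal((len(h), h.shape[1]))  # one GP draw per output dimension
    return h

x = np.linspace(-2.0, 2.0, 100)[:, None]
shallow = sample_deep_gp(x, depth=1)
deep = sample_deep_gp(x, depth=10)

Plotting shallow against deep for increasing depth gives a quick empirical view of the degrees-of-freedom collapse described above: samples from deep compositions tend to look nearly flat over most of the input range, changing appreciably only in a few places.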

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-duvenaud14,
  title     = {{Avoiding pathologies in very deep networks}},
  author    = {Duvenaud, David and Rippel, Oren and Adams, Ryan and Ghahramani, Zoubin},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {202--210},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/duvenaud14.pdf},
  url       = {https://proceedings.mlr.press/v33/duvenaud14.html},
  abstract  = {Choosing appropriate architectures and regularization strategies of deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Lastly, we characterize the class of models obtained by performing dropout on Gaussian processes.}
}
Endnote
%0 Conference Paper
%T Avoiding pathologies in very deep networks
%A David Duvenaud
%A Oren Rippel
%A Ryan Adams
%A Zoubin Ghahramani
%B Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2014
%E Samuel Kaski
%E Jukka Corander
%F pmlr-v33-duvenaud14
%I PMLR
%P 202--210
%U https://proceedings.mlr.press/v33/duvenaud14.html
%V 33
%X Choosing appropriate architectures and regularization strategies of deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Lastly, we characterize the class of models obtained by performing dropout on Gaussian processes.
RIS
TY - CPAPER
TI - Avoiding pathologies in very deep networks
AU - David Duvenaud
AU - Oren Rippel
AU - Ryan Adams
AU - Zoubin Ghahramani
BT - Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
DA - 2014/04/02
ED - Samuel Kaski
ED - Jukka Corander
ID - pmlr-v33-duvenaud14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 33
SP - 202
EP - 210
L1 - http://proceedings.mlr.press/v33/duvenaud14.pdf
UR - https://proceedings.mlr.press/v33/duvenaud14.html
AB - Choosing appropriate architectures and regularization strategies of deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Lastly, we characterize the class of models obtained by performing dropout on Gaussian processes.
ER -
APA
Duvenaud, D., Rippel, O., Adams, R. & Ghahramani, Z. (2014). Avoiding pathologies in very deep networks. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:202-210. Available from https://proceedings.mlr.press/v33/duvenaud14.html.
