Finding Nearly Everything within Random Binary Networks

Kartik Sreenivasan, Shashank Rajput, Jy-Yong Sohn, Dimitris Papailiopoulos
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:3531-3541, 2022.

Abstract

A recent work by Ramanujan et al. (2020) provides significant empirical evidence that sufficiently overparameterized, random neural networks contain untrained subnetworks that achieve state-of-the-art accuracy on several predictive tasks. A follow-up line of theoretical work justifies these findings by proving that slightly overparameterized neural networks, with commonly used continuous-valued random initializations, can indeed be pruned to approximate any target network. In this work, we show that the amplitude of those random weights does not even matter. We prove that any target network of width $d$ and depth $l$ can be approximated up to arbitrary accuracy $\varepsilon$ by simply pruning a random network of binary $\{\pm1\}$ weights that is wider and deeper than the target network only by a polylogarithmic factor of $d, l$ and $\varepsilon$.
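The result is existential, but the core idea — that a useful subnetwork already hides inside a random $\{\pm1\}$ network and only needs to be uncovered by pruning — can be illustrated with a toy brute-force sketch. This is not the paper's construction; the network size, random seed, and target function are arbitrary demo choices:

```python
import itertools
import random

# Toy illustration: exhaustively search for a pruning mask of a tiny random
# {+1, -1} network that best approximates a target function in sup norm.
# Brute force is feasible only at this toy scale.

random.seed(0)
relu = lambda x: max(x, 0.0)

H = 6  # hidden width of the random binary network (demo choice)
w1 = [random.choice([-1.0, 1.0]) for _ in range(H)]  # input -> hidden weights
w2 = [random.choice([-1.0, 1.0]) for _ in range(H)]  # hidden -> output weights

def pruned_net(x, mask1, mask2):
    """Forward pass keeping only the weights whose mask bit is 1."""
    return sum(w2[i] * mask2[i] * relu(w1[i] * mask1[i] * x) for i in range(H))

target = relu  # target network: a single ReLU unit (demo choice)
xs = [i / 10.0 for i in range(-10, 11)]  # sample inputs in [-1, 1]

best_err = float("inf")
for mask1 in itertools.product([0, 1], repeat=H):
    for mask2 in itertools.product([0, 1], repeat=H):
        err = max(abs(pruned_net(x, mask1, mask2) - target(x)) for x in xs)
        best_err = min(best_err, err)

print(f"best sup-norm error over all masks: {best_err:.3f}")
```

If the random draw happens to contain a hidden unit with $w_1[i] = w_2[i] = +1$, keeping only that path recovers the target exactly; the paper's contribution is proving that, with polylogarithmic overparameterization, such good subnetworks exist with high probability for *any* target network.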

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-sreenivasan22a,
  title     = {Finding Nearly Everything within Random Binary Networks},
  author    = {Sreenivasan, Kartik and Rajput, Shashank and Sohn, Jy-Yong and Papailiopoulos, Dimitris},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {3531--3541},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/sreenivasan22a/sreenivasan22a.pdf},
  url       = {https://proceedings.mlr.press/v151/sreenivasan22a.html},
  abstract  = {A recent work by Ramanujan et al. (2020) provides significant empirical evidence that sufficiently overparameterized, random neural networks contain untrained subnetworks that achieve state-of-the-art accuracy on several predictive tasks. A follow-up line of theoretical work provides justification of these findings by proving that slightly overparameterized neural networks, with commonly used continuous-valued random initializations, can indeed be pruned to approximate any target network. In this work, we show that the amplitude of those random weights does not even matter. We prove that any target network of width $d$ and depth $l$ can be approximated up to arbitrary accuracy $\varepsilon$ by simply pruning a random network of binary $\{\pm1\}$ weights that is wider and deeper than the target network only by a polylogarithmic factor of $d, l$ and $\varepsilon$.}
}
Endnote
%0 Conference Paper
%T Finding Nearly Everything within Random Binary Networks
%A Kartik Sreenivasan
%A Shashank Rajput
%A Jy-Yong Sohn
%A Dimitris Papailiopoulos
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-sreenivasan22a
%I PMLR
%P 3531--3541
%U https://proceedings.mlr.press/v151/sreenivasan22a.html
%V 151
%X A recent work by Ramanujan et al. (2020) provides significant empirical evidence that sufficiently overparameterized, random neural networks contain untrained subnetworks that achieve state-of-the-art accuracy on several predictive tasks. A follow-up line of theoretical work provides justification of these findings by proving that slightly overparameterized neural networks, with commonly used continuous-valued random initializations, can indeed be pruned to approximate any target network. In this work, we show that the amplitude of those random weights does not even matter. We prove that any target network of width $d$ and depth $l$ can be approximated up to arbitrary accuracy $\varepsilon$ by simply pruning a random network of binary $\{\pm1\}$ weights that is wider and deeper than the target network only by a polylogarithmic factor of $d, l$ and $\varepsilon$.
APA
Sreenivasan, K., Rajput, S., Sohn, J.-Y. &amp; Papailiopoulos, D. (2022). Finding Nearly Everything within Random Binary Networks. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:3531-3541. Available from https://proceedings.mlr.press/v151/sreenivasan22a.html.