Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky’s Theorem

Clayton H. Sanford, Vaggos Chatziafratis
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:9505-9549, 2022.

Abstract

Given a target function $f$, how large must a neural network be in order to approximate $f$? Recent works examine this basic question on neural network expressivity from the lens of dynamical systems and provide novel “depth-vs-width” tradeoffs for a large family of functions $f$. They suggest that such tradeoffs are governed by the existence of periodic points or cycles in $f$. Our work, by further deploying dynamical systems concepts, illuminates a more subtle connection between periodicity and expressivity: we prove that periodic points alone lead to suboptimal depth-width tradeoffs and we improve upon them by demonstrating that certain “chaotic itineraries” give stronger exponential tradeoffs, even in regimes where previous analyses only imply polynomial gaps. Contrary to prior works, our bounds are nearly-optimal, tighten as the period increases, and handle strong notions of inapproximability (e.g., constant $L_1$ error). More broadly, we identify a phase transition to the chaotic regime that exactly coincides with an abrupt shift in other notions of function complexity, including VC-dimension and topological entropy.
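The tradeoffs summarized above come from iterating a one-dimensional map: a deep network can realize the n-fold composition $f^n$ with depth growing linearly in $n$, while a shallow network must pay in width for the oscillations of $f^n$. As a minimal illustrative sketch (our own, not code from the paper), the snippet below uses the tent map $T(x) = 1 - |1 - 2x|$ on $[0,1]$ as a stand-in chaotic map; it checks its period-3 orbit $\{2/7, 4/7, 6/7\}$ and counts how often $T^n$ crosses the level $1/2$, which doubles with each composition. The choice of map and the grid resolution are ours, for illustration only.

import numpy as np

def tent(x):
    # Full tent map T(x) = 1 - |1 - 2x| on [0, 1].
    return 1.0 - np.abs(1.0 - 2.0 * x)

def compose(f, x, n):
    # Evaluate the n-fold composition f^n at x.
    for _ in range(n):
        x = f(x)
    return x

# A period-3 itinerary of the tent map: 2/7 -> 4/7 -> 6/7 -> 2/7.
x, orbit = 2.0 / 7.0, [2.0 / 7.0]
for _ in range(3):
    x = tent(x)
    orbit.append(x)
print("period-3 itinerary:", [round(v, 6) for v in orbit])

# T^n crosses the level 1/2 exactly 2^n times; counting these
# crossings numerically exhibits the exponential oscillation growth.
xs = np.linspace(0.0, 1.0, 200_003)
for n in range(1, 9):
    vals = compose(tent, xs, n)
    crossings = np.count_nonzero((vals[:-1] - 0.5) * (vals[1:] - 0.5) < 0)
    print(f"n = {n}: crossings = {crossings}  (2^n = {2 ** n})")

Roughly speaking, this doubling of oscillations under composition is the mechanism behind depth-vs-width separations in this line of work: depth-$O(n)$ networks track $f^n$ exactly, whereas constant-depth networks need width exponential in $n$ to match its oscillation count.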

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-sanford22a,
  title     = {Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky’s Theorem},
  author    = {Sanford, Clayton H. and Chatziafratis, Vaggos},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {9505--9549},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/sanford22a/sanford22a.pdf},
  url       = {https://proceedings.mlr.press/v151/sanford22a.html},
  abstract  = {Given a target function $f$, how large must a neural network be in order to approximate $f$? Recent works examine this basic question on neural network expressivity from the lens of dynamical systems and provide novel “depth-vs-width” tradeoffs for a large family of functions $f$. They suggest that such tradeoffs are governed by the existence of periodic points or cycles in $f$. Our work, by further deploying dynamical systems concepts, illuminates a more subtle connection between periodicity and expressivity: we prove that periodic points alone lead to suboptimal depth-width tradeoffs and we improve upon them by demonstrating that certain “chaotic itineraries” give stronger exponential tradeoffs, even in regimes where previous analyses only imply polynomial gaps. Contrary to prior works, our bounds are nearly-optimal, tighten as the period increases, and handle strong notions of inapproximability (e.g., constant $L_1$ error). More broadly, we identify a phase transition to the chaotic regime that exactly coincides with an abrupt shift in other notions of function complexity, including VC-dimension and topological entropy.}
}
Endnote
%0 Conference Paper
%T Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky’s Theorem
%A Clayton H. Sanford
%A Vaggos Chatziafratis
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-sanford22a
%I PMLR
%P 9505--9549
%U https://proceedings.mlr.press/v151/sanford22a.html
%V 151
%X Given a target function $f$, how large must a neural network be in order to approximate $f$? Recent works examine this basic question on neural network expressivity from the lens of dynamical systems and provide novel “depth-vs-width” tradeoffs for a large family of functions $f$. They suggest that such tradeoffs are governed by the existence of periodic points or cycles in $f$. Our work, by further deploying dynamical systems concepts, illuminates a more subtle connection between periodicity and expressivity: we prove that periodic points alone lead to suboptimal depth-width tradeoffs and we improve upon them by demonstrating that certain “chaotic itineraries” give stronger exponential tradeoffs, even in regimes where previous analyses only imply polynomial gaps. Contrary to prior works, our bounds are nearly-optimal, tighten as the period increases, and handle strong notions of inapproximability (e.g., constant $L_1$ error). More broadly, we identify a phase transition to the chaotic regime that exactly coincides with an abrupt shift in other notions of function complexity, including VC-dimension and topological entropy.
APA
Sanford, C.H. & Chatziafratis, V. (2022). Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky’s Theorem. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:9505-9549. Available from https://proceedings.mlr.press/v151/sanford22a.html.

Related Material