Approximation Capabilities of Neural ODEs and Invertible Residual Networks

Han Zhang, Xi Gao, Jacob Unterman, Tom Arodz
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11086-11095, 2020.

Abstract

Recent interest in invertible models and normalizing flows has resulted in new architectures that ensure invertibility of the network model. Neural ODEs and i-ResNets are two recent techniques for constructing models that are invertible, but it is unclear if they can be used to approximate any continuous invertible mapping. Here, we show that out of the box, both of these architectures are limited in their approximation capabilities. We then show how to overcome this limitation: we prove that any homeomorphism on a $p$-dimensional Euclidean space can be approximated by a Neural ODE or an i-ResNet operating on a $2p$-dimensional Euclidean space. We conclude by showing that capping a Neural ODE or an i-ResNet with a single linear layer is sufficient to turn the model into a universal approximator for non-invertible continuous functions.
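The dimension-doubling result can be illustrated numerically. A one-dimensional Neural ODE flow is monotone (trajectories cannot cross), so it cannot approximate the orientation-reversing homeomorphism $x \mapsto -x$; embedding the input in $2p = 2$ dimensions removes the obstruction. The sketch below (not from the paper's code; the rotation vector field and RK4 integrator are illustrative choices) integrates $\dot z = Az$ with $A = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$ for time $\pi$, which carries $(x, 0)$ to $(-x, 0)$ and thus realizes $x \mapsto -x$ after projection:

```python
import numpy as np

def flow(z0, T=np.pi, steps=1000):
    """Integrate the linear ODE dz/dt = A z with classical RK4.

    A generates a rigid rotation of the plane; running it for time pi
    rotates every point by 180 degrees.
    """
    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    z = np.asarray(z0, dtype=float)
    h = T / steps
    for _ in range(steps):
        k1 = A @ z
        k2 = A @ (z + 0.5 * h * k1)
        k3 = A @ (z + 0.5 * h * k2)
        k4 = A @ (z + h * k3)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

x = 0.7
zT = flow([x, 0.0])  # embed x as (x, 0) in the doubled space
# zT is approximately (-x, 0): the map x -> -x, impossible for a
# 1-D Neural ODE flow, is realized by a 2-D flow plus projection.
```

The same mechanism underlies the general result: lifting to twice the dimension gives the flow room to move points past each other without trajectory crossings.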

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhang20h,
  title     = {Approximation Capabilities of Neural {ODE}s and Invertible Residual Networks},
  author    = {Zhang, Han and Gao, Xi and Unterman, Jacob and Arodz, Tom},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11086--11095},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhang20h/zhang20h.pdf},
  url       = {http://proceedings.mlr.press/v119/zhang20h.html},
  abstract  = {Recent interest in invertible models and normalizing flows has resulted in new architectures that ensure invertibility of the network model. Neural ODEs and i-ResNets are two recent techniques for constructing models that are invertible, but it is unclear if they can be used to approximate any continuous invertible mapping. Here, we show that out of the box, both of these architectures are limited in their approximation capabilities. We then show how to overcome this limitation: we prove that any homeomorphism on a $p$-dimensional Euclidean space can be approximated by a Neural ODE or an i-ResNet operating on a $2p$-dimensional Euclidean space. We conclude by showing that capping a Neural ODE or an i-ResNet with a single linear layer is sufficient to turn the model into a universal approximator for non-invertible continuous functions.}
}
Endnote
%0 Conference Paper
%T Approximation Capabilities of Neural ODEs and Invertible Residual Networks
%A Han Zhang
%A Xi Gao
%A Jacob Unterman
%A Tom Arodz
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhang20h
%I PMLR
%P 11086--11095
%U http://proceedings.mlr.press/v119/zhang20h.html
%V 119
%X Recent interest in invertible models and normalizing flows has resulted in new architectures that ensure invertibility of the network model. Neural ODEs and i-ResNets are two recent techniques for constructing models that are invertible, but it is unclear if they can be used to approximate any continuous invertible mapping. Here, we show that out of the box, both of these architectures are limited in their approximation capabilities. We then show how to overcome this limitation: we prove that any homeomorphism on a $p$-dimensional Euclidean space can be approximated by a Neural ODE or an i-ResNet operating on a $2p$-dimensional Euclidean space. We conclude by showing that capping a Neural ODE or an i-ResNet with a single linear layer is sufficient to turn the model into a universal approximator for non-invertible continuous functions.
APA
Zhang, H., Gao, X., Unterman, J., & Arodz, T. (2020). Approximation Capabilities of Neural ODEs and Invertible Residual Networks. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11086-11095. Available from http://proceedings.mlr.press/v119/zhang20h.html.