Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks

Sho Sonoda, Hideyuki Ishi, Isao Ishikawa, Masahiro Ikeda
Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 228:129-144, 2024.

Abstract

The symmetry and geometry of input data are considered to be encoded in the internal data representation inside a neural network, but the specific encoding rule has received little attention. In this study, we present a systematic method to induce a generalized neural network and its right inverse operator, called the \emph{ridgelet transform}, from a \emph{joint group invariant function} on the data-parameter domain. Since the ridgelet transform is an inverse, (1) it can describe the arrangement of parameters for the network to represent a target function, which is understood as the \emph{encoding rule}, and (2) it implies the \emph{universality} of the network. Based on group representation theory, we present a new, simple proof of universality using Schur’s lemma in a unified manner covering a wide class of networks, for example, the original ridgelet transform, formal \emph{deep} networks, and the dual voice transform. Whereas traditional universality theorems were proved using functional analysis, this study sheds light on the group-theoretic aspect of approximation theory, connecting geometric deep learning to abstract harmonic analysis.
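For concreteness, the classical Euclidean case that the paper generalizes can be sketched as follows (the notation below is illustrative, not taken from the paper): an infinitely wide shallow network with activation $\sigma$ integrates a coefficient function $\gamma$ over all hidden parameters $(a,b)$, and the ridgelet transform $R$ with respect to an auxiliary function $\psi$ acts as a right inverse of that integral representation.

```latex
% Illustrative sketch of the classical ridgelet setup on the Euclidean
% data-parameter domain; the paper generalizes this construction to
% joint group invariant functions.

% Integral representation of an infinitely wide network with coefficients \gamma:
S[\gamma](x) = \int_{\mathbb{R}^m \times \mathbb{R}} \gamma(a,b)\,
  \sigma(a \cdot x - b)\, \mathrm{d}a\, \mathrm{d}b

% Ridgelet transform of a target function f with respect to \psi:
R[f](a,b) = \int_{\mathbb{R}^m} f(x)\, \overline{\psi(a \cdot x - b)}\, \mathrm{d}x

% Right-inverse (reconstruction) property: for an admissible pair (\sigma, \psi)
% with admissibility constant ((\sigma, \psi)),
S\bigl[R[f]\bigr] = (\!(\sigma, \psi)\!)\, f
```

When the admissibility constant is finite and nonzero, every target $f$ in the relevant function class is exactly represented by the network with coefficients $\gamma = R[f] / (\!(\sigma,\psi)\!)$, which is the sense in which the inverse implies universality.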

Cite this Paper


BibTeX
@InProceedings{pmlr-v228-sonoda24a,
  title     = {Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks},
  author    = {Sonoda, Sho and Ishi, Hideyuki and Ishikawa, Isao and Ikeda, Masahiro},
  booktitle = {Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {129--144},
  year      = {2024},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Miolane, Nina},
  volume    = {228},
  series    = {Proceedings of Machine Learning Research},
  month     = {16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v228/main/assets/sonoda24a/sonoda24a.pdf},
  url       = {https://proceedings.mlr.press/v228/sonoda24a.html},
  abstract  = {The symmetry and geometry of input data are considered to be encoded in the internal data representation inside the neural network, but the specific encoding rule has been less investigated. In this study, we present a systematic method to induce a generalized neural network and its right inverse operator, called the \emph{ridgelet transform}, from a \emph{joint group invariant function} on the data-parameter domain. Since the ridgelet transform is an inverse, (1) it can describe the arrangement of parameters for the network to represent a target function, which is understood as the \emph{encoding rule}, and (2) it implies the \emph{universality} of the network. Based on the group representation theory, we present a new simple proof of the universality by using Schur’s lemma in a unified manner covering a wide class of networks, for example, the original ridgelet transform, formal \emph{deep} networks, and the dual voice transform. Since traditional universality theorems were demonstrated based on functional analysis, this study sheds light on the group theoretic aspect of the approximation theory, connecting geometric deep learning to abstract harmonic analysis.}
}
Endnote
%0 Conference Paper
%T Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks
%A Sho Sonoda
%A Hideyuki Ishi
%A Isao Ishikawa
%A Masahiro Ikeda
%B Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations
%C Proceedings of Machine Learning Research
%D 2024
%E Sophia Sanborn
%E Christian Shewmake
%E Simone Azeglio
%E Nina Miolane
%F pmlr-v228-sonoda24a
%I PMLR
%P 129--144
%U https://proceedings.mlr.press/v228/sonoda24a.html
%V 228
%X The symmetry and geometry of input data are considered to be encoded in the internal data representation inside the neural network, but the specific encoding rule has been less investigated. In this study, we present a systematic method to induce a generalized neural network and its right inverse operator, called the \emph{ridgelet transform}, from a \emph{joint group invariant function} on the data-parameter domain. Since the ridgelet transform is an inverse, (1) it can describe the arrangement of parameters for the network to represent a target function, which is understood as the \emph{encoding rule}, and (2) it implies the \emph{universality} of the network. Based on the group representation theory, we present a new simple proof of the universality by using Schur’s lemma in a unified manner covering a wide class of networks, for example, the original ridgelet transform, formal \emph{deep} networks, and the dual voice transform. Since traditional universality theorems were demonstrated based on functional analysis, this study sheds light on the group theoretic aspect of the approximation theory, connecting geometric deep learning to abstract harmonic analysis.
APA
Sonoda, S., Ishi, H., Ishikawa, I. & Ikeda, M. (2024). Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks. Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 228:129-144. Available from https://proceedings.mlr.press/v228/sonoda24a.html.