A sampling theory perspective on activations for implicit neural representations

Hemanth Saratchandran, Sameera Ramasinghe, Violetta Shevchenko, Alexander Long, Simon Lucey
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:43422-43444, 2024.

Abstract

Implicit Neural Representations (INRs) have gained popularity for encoding signals as compact, differentiable entities. Although INRs commonly rely on techniques such as Fourier positional encodings or non-traditional activation functions (e.g., Gaussian, sinusoid, or wavelet) to capture high-frequency content, the properties of these techniques have not been explored within a unified theoretical framework. Addressing this gap, we conduct a comprehensive analysis of these activations from a sampling theory perspective. Our investigation reveals that, especially in shallow INRs, $\mathrm{sinc}$ activations (previously unused in conjunction with INRs) are theoretically optimal for signal encoding. Additionally, we establish a connection between dynamical systems and INRs, leveraging sampling theory to bridge these two paradigms.
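
The sampling-theoretic motivation behind $\mathrm{sinc}$ is the classical Shannon–Whittaker theorem: a signal $f$ bandlimited to $[-B, B]$ is reconstructed exactly from uniform samples in the $\mathrm{sinc}$ basis,

$f(x) = \sum_{n \in \mathbb{Z}} f\left(\frac{n}{2B}\right) \mathrm{sinc}(2Bx - n), \quad \mathrm{sinc}(t) = \frac{\sin(\pi t)}{\pi t},$

which suggests why a shallow network with $\mathrm{sinc}$ activations is a natural basis for encoding bandlimited signals.

As a concrete illustration, the following is a minimal sketch (in PyTorch, not the authors' released code) of a shallow $\mathrm{sinc}$-activated INR. The frequency scale omega0 is a hypothetical hyperparameter, analogous to SIREN's $\omega_0$; torch.sinc computes the normalized sinc defined above.

import torch
import torch.nn as nn

class SincINR(nn.Module):
    """Shallow coordinate network mapping coordinates to signal values via sinc activations."""
    def __init__(self, in_dim=2, hidden=256, out_dim=1, omega0=30.0):
        super().__init__()
        self.omega0 = omega0            # hypothetical frequency scale, akin to SIREN's w0
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, x):
        # torch.sinc(t) = sin(pi*t) / (pi*t); scaling by omega0 widens the
        # band of frequencies each layer can represent.
        h = torch.sinc(self.omega0 * self.fc1(x))
        h = torch.sinc(self.omega0 * self.fc2(h))
        return self.out(h)

# Usage: query the INR at random coordinates in [-1, 1]^2,
# e.g. to fit a grayscale image by regressing pixel intensities.
coords = torch.rand(1024, 2) * 2 - 1
model = SincINR()
values = model(coords)   # predicted signal samples, shape (1024, 1)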

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-saratchandran24a,
  title     = {A sampling theory perspective on activations for implicit neural representations},
  author    = {Saratchandran, Hemanth and Ramasinghe, Sameera and Shevchenko, Violetta and Long, Alexander and Lucey, Simon},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {43422--43444},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/saratchandran24a/saratchandran24a.pdf},
  url       = {https://proceedings.mlr.press/v235/saratchandran24a.html},
  abstract  = {Implicit Neural Representations (INRs) have gained popularity for encoding signals as compact, differentiable entities. While commonly using techniques like Fourier positional encodings or non-traditional activation functions (e.g., Gaussian, sinusoid, or wavelets) to capture high-frequency content, their properties lack exploration within a unified theoretical framework. Addressing this gap, we conduct a comprehensive analysis of these activations from a sampling theory perspective. Our investigation reveals that, especially in shallow INRs, $\mathrm{sinc}$ activations—previously unused in conjunction with INRs—are theoretically optimal for signal encoding. Additionally, we establish a connection between dynamical systems and INRs, leveraging sampling theory to bridge these two paradigms.}
}
Endnote
%0 Conference Paper
%T A sampling theory perspective on activations for implicit neural representations
%A Hemanth Saratchandran
%A Sameera Ramasinghe
%A Violetta Shevchenko
%A Alexander Long
%A Simon Lucey
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-saratchandran24a
%I PMLR
%P 43422--43444
%U https://proceedings.mlr.press/v235/saratchandran24a.html
%V 235
%X Implicit Neural Representations (INRs) have gained popularity for encoding signals as compact, differentiable entities. While commonly using techniques like Fourier positional encodings or non-traditional activation functions (e.g., Gaussian, sinusoid, or wavelets) to capture high-frequency content, their properties lack exploration within a unified theoretical framework. Addressing this gap, we conduct a comprehensive analysis of these activations from a sampling theory perspective. Our investigation reveals that, especially in shallow INRs, $\mathrm{sinc}$ activations—previously unused in conjunction with INRs—are theoretically optimal for signal encoding. Additionally, we establish a connection between dynamical systems and INRs, leveraging sampling theory to bridge these two paradigms.
APA
Saratchandran, H., Ramasinghe, S., Shevchenko, V., Long, A. & Lucey, S. (2024). A sampling theory perspective on activations for implicit neural representations. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:43422-43444. Available from https://proceedings.mlr.press/v235/saratchandran24a.html.