Periodic signal recovery with regularized sine neural networks

David A. R. Robin, Kevin Scaman, Marc Lelarge
Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 197:98-110, 2023.

Abstract

We consider the problem of learning a periodic one-dimensional signal with neural networks, and designing models that are able to extrapolate the signal well beyond the training window. First, we show that multi-layer perceptrons with ReLU activations are provably unable to perform this task, and lead to poor performance in practice even close to the training window. Then, we propose a novel architecture using sine activation functions along with a well-chosen non-convex regularization, that is able to extrapolate the signal with low error well beyond the training window. Our architecture is several orders of magnitude better than its competitors for distant extrapolation (beyond 100 periods of the signal), while being able to accurately recover the frequency spectrum of the signal in a multi-tone setting.
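To make the idea concrete, here is a minimal sketch (my own simplification, not the authors' code) of the core mechanism the abstract describes: a model built from sine units, fit with a sparsity-inducing penalty on the amplitudes so that the recovered spectrum is concentrated on the true tones. For simplicity this sketch fixes the unit frequencies to integer multiples and learns only the amplitudes, whereas the paper's architecture also learns the frequencies themselves with a non-convex regularizer.

```python
import math

# Sketch: fit f(x) = sum_k a_k * sin(k * x) to a two-tone periodic signal
# by gradient descent, with an L1 penalty on the amplitudes a_k to
# promote a sparse recovered spectrum.

K = 6                                   # number of sine units (k = 1..K)
a = [0.0] * K                           # trainable amplitudes

target = lambda x: math.sin(2 * x) + 0.5 * math.sin(5 * x)  # two-tone signal
xs = [i * 2 * math.pi * 5 / 500 for i in range(500)]        # 5 full periods

lr, lam = 0.5, 1e-3
for _ in range(300):
    grads = [0.0] * K
    for x in xs:
        err = sum(a[k] * math.sin((k + 1) * x) for k in range(K)) - target(x)
        for k in range(K):
            grads[k] += err * math.sin((k + 1) * x) / len(xs)
    for k in range(K):
        # least-squares gradient plus a subgradient of the L1 penalty
        g1 = 1 if a[k] > 0 else -1 if a[k] < 0 else 0
        a[k] -= lr * (grads[k] + lam * g1)

# Recovered spectrum: a_2 close to 1.0, a_5 close to 0.5, the rest near 0,
# and f extrapolates to any x because each unit is exactly periodic.
spectrum = [round(v, 2) for v in a]
```

Because every unit is globally periodic, the fitted model extrapolates arbitrarily far outside the training window, which is precisely the property a ReLU network lacks (its output is piecewise affine, hence eventually non-periodic).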

Cite this Paper


BibTeX
@InProceedings{pmlr-v197-robin23a,
  title     = {Periodic signal recovery with regularized sine neural networks},
  author    = {Robin, David A. R. and Scaman, Kevin and Lelarge, Marc},
  booktitle = {Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {98--110},
  year      = {2023},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Di Bernardo, Arianna and Miolane, Nina},
  volume    = {197},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v197/robin23a/robin23a.pdf},
  url       = {https://proceedings.mlr.press/v197/robin23a.html},
  abstract  = {We consider the problem of learning a periodic one-dimensional signal with neural networks, and designing models that are able to extrapolate the signal well beyond the training window. First, we show that multi-layer perceptrons with ReLU activations are provably unable to perform this task, and lead to poor performance in practice even close to the training window. Then, we propose a novel architecture using sine activation functions along with a well-chosen non-convex regularization, that is able to extrapolate the signal with low error well beyond the training window. Our architecture is several orders of magnitude better than its competitors for distant extrapolation (beyond 100 periods of the signal), while being able to accurately recover the frequency spectrum of the signal in a multi-tone setting.}
}
Endnote
%0 Conference Paper
%T Periodic signal recovery with regularized sine neural networks
%A David A. R. Robin
%A Kevin Scaman
%A Marc Lelarge
%B Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations
%C Proceedings of Machine Learning Research
%D 2023
%E Sophia Sanborn
%E Christian Shewmake
%E Simone Azeglio
%E Arianna Di Bernardo
%E Nina Miolane
%F pmlr-v197-robin23a
%I PMLR
%P 98--110
%U https://proceedings.mlr.press/v197/robin23a.html
%V 197
%X We consider the problem of learning a periodic one-dimensional signal with neural networks, and designing models that are able to extrapolate the signal well beyond the training window. First, we show that multi-layer perceptrons with ReLU activations are provably unable to perform this task, and lead to poor performance in practice even close to the training window. Then, we propose a novel architecture using sine activation functions along with a well-chosen non-convex regularization, that is able to extrapolate the signal with low error well beyond the training window. Our architecture is several orders of magnitude better than its competitors for distant extrapolation (beyond 100 periods of the signal), while being able to accurately recover the frequency spectrum of the signal in a multi-tone setting.
APA
Robin, D.A.R., Scaman, K. & Lelarge, M. (2023). Periodic signal recovery with regularized sine neural networks. Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 197:98-110. Available from https://proceedings.mlr.press/v197/robin23a.html.