Learning Polynomial Activation Functions for Deep Neural Networks

Linghao Zhang, Jiawang Nie, Tingting Tang
Proceedings of the 1st Conference on Topology, Algebra, and Geometry in Data Science (TAG-DS 2025), PMLR 321:90-99, 2026.

Abstract

Activation functions are crucial for deep neural networks. This work frames the problem of training neural networks with learnable polynomial activation functions as a polynomial optimization problem, which is solvable by the Moment-SOS hierarchy. This approach represents a fundamental departure from the conventional paradigm of training deep neural networks, which relies on local optimization methods such as backpropagation and gradient descent. Numerical experiments demonstrate the accuracy and robustness of optimal parameter recovery in the presence of noise.
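
As a concrete illustration (not taken from the paper), the sketch below builds the squared-error training loss of a one-neuron network with a learnable quadratic activation sigma(t) = a0 + a1*t + a2*t^2, and verifies with SymPy that this loss is a genuine polynomial in the weights and activation coefficients, which is the form a Moment-SOS hierarchy can in principle address. All variable names and the tiny dataset are hypothetical.

    # Hypothetical sketch: the training loss of a one-neuron network with a
    # learnable polynomial activation is itself a polynomial in all parameters.
    import sympy as sp

    # Network parameters: weight w, bias b, and activation coefficients
    # a0, a1, a2, giving the learnable activation sigma(t) = a0 + a1*t + a2*t**2.
    w, b, a0, a1, a2 = sp.symbols('w b a0 a1 a2')

    # A tiny synthetic dataset (inputs x_i, targets y_i); purely illustrative.
    data = [(0.0, 1.0), (0.5, 1.5), (1.0, 3.0)]

    def model(x):
        t = w * x + b                      # pre-activation
        return a0 + a1 * t + a2 * t**2     # learnable polynomial activation

    # Squared-error loss over the dataset.
    loss = sp.expand(sum((model(x) - y)**2 for x, y in data))

    # The loss is a polynomial in (w, b, a0, a1, a2): the model output has
    # total degree 3 in the parameters, so the squared error has degree 6.
    # Minimizing it is therefore a polynomial optimization problem.
    poly = sp.Poly(loss, w, b, a0, a1, a2)
    print(poly.total_degree())   # -> 6

The sketch stops at constructing the polynomial objective; the paper's actual method would then pass such an objective to a Moment-SOS solver, whose interfaces vary across tools.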

Cite this Paper

BibTeX
@InProceedings{pmlr-v321-zhang26a,
  title = {Learning Polynomial Activation Functions for Deep Neural Networks},
  author = {Zhang, Linghao and Nie, Jiawang and Tang, Tingting},
  booktitle = {Proceedings of the 1st Conference on Topology, Algebra, and Geometry in Data Science (TAG-DS 2025)},
  pages = {90--99},
  year = {2026},
  editor = {Bernardez Gil, Guillermo and Black, Mitchell and Cloninger, Alexander and Doster, Timothy and Emerson, Tegan and García-Redondo, Inés and Holtz, Chester and Kotak, Mit and Kvinge, Henry and Mishne, Gal and Papillon, Mathilde and Pouplin, Alison and Rainey, Katie and Rieck, Bastian and Telyatnikov, Lev and Yeats, Eric and Wang, Qingsong and Wang, Yusu and Wayland, Jeremy},
  volume = {321},
  series = {Proceedings of Machine Learning Research},
  month = {01--02 Dec},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v321/main/assets/zhang26a/zhang26a.pdf},
  url = {https://proceedings.mlr.press/v321/zhang26a.html},
  abstract = {Activation functions are crucial for deep neural networks. This work frames the problem of training neural networks with learnable polynomial activation functions as a polynomial optimization problem, which is solvable by the Moment-SOS hierarchy. This approach represents a fundamental departure from the conventional paradigm of training deep neural networks, which relies on local optimization methods such as backpropagation and gradient descent. Numerical experiments demonstrate the accuracy and robustness of optimal parameter recovery in the presence of noise.}
}
Endnote
%0 Conference Paper
%T Learning Polynomial Activation Functions for Deep Neural Networks
%A Linghao Zhang
%A Jiawang Nie
%A Tingting Tang
%B Proceedings of the 1st Conference on Topology, Algebra, and Geometry in Data Science (TAG-DS 2025)
%C Proceedings of Machine Learning Research
%D 2026
%E Guillermo Bernardez Gil
%E Mitchell Black
%E Alexander Cloninger
%E Timothy Doster
%E Tegan Emerson
%E Inés García-Redondo
%E Chester Holtz
%E Mit Kotak
%E Henry Kvinge
%E Gal Mishne
%E Mathilde Papillon
%E Alison Pouplin
%E Katie Rainey
%E Bastian Rieck
%E Lev Telyatnikov
%E Eric Yeats
%E Qingsong Wang
%E Yusu Wang
%E Jeremy Wayland
%F pmlr-v321-zhang26a
%I PMLR
%P 90--99
%U https://proceedings.mlr.press/v321/zhang26a.html
%V 321
%X Activation functions are crucial for deep neural networks. This work frames the problem of training neural networks with learnable polynomial activation functions as a polynomial optimization problem, which is solvable by the Moment-SOS hierarchy. This approach represents a fundamental departure from the conventional paradigm of training deep neural networks, which relies on local optimization methods such as backpropagation and gradient descent. Numerical experiments demonstrate the accuracy and robustness of optimal parameter recovery in the presence of noise.
APA
Zhang, L., Nie, J. & Tang, T. (2026). Learning Polynomial Activation Functions for Deep Neural Networks. Proceedings of the 1st Conference on Topology, Algebra, and Geometry in Data Science (TAG-DS 2025), in Proceedings of Machine Learning Research 321:90-99. Available from https://proceedings.mlr.press/v321/zhang26a.html.
