Learning Input Encodings for Kernel-Optimal Implicit Neural Representations

Zhemin Li, Liyuan Ma, Hongxia Wang, Yaoyun Zeng, Xiaolong Han
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:35620-35636, 2025.

Abstract

Implicit Neural Representations (INRs) rely heavily on architectural choices for good generalization. Developing theoretically grounded approaches for architecture design remains an active area of research. Via theoretical analysis of the infinite-width limit, we establish a methodology that characterizes an INR’s generalization by means of kernel alignment. We first formulate the optimal kernel that minimizes pointwise expected squared error, then demonstrate that the Neural Tangent Kernel of the composed function (INR with input encoding) can approximate any positive semidefinite dot-product kernel through input feature mapping adjustments. Building upon these insights, we propose a Kernel Alignment Regularizer (KAR) that naturally integrates with existing INR systems to enhance kernel alignment. We further develop Plug-in Encoding for Aligned Kernels (PEAK) to refine INR models with KAR using learnable input encoding. This work contributes to the ongoing research efforts in bridging theory and practice for principled INR architecture design. Code is available at https://github.com/lizhemin15/KAR.
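
For context on the abstract's central quantity: kernel-target alignment between a model kernel K and the ideal kernel y y^T is classically defined (in the sense of Cristianini et al.'s kernel-target alignment) as A(K, y y^T) = <K, y y^T>_F / (||K||_F ||y y^T||_F), so a regularizer can reward kernels that approach this optimum. The PyTorch sketch below illustrates one such alignment regularizer under stated assumptions: the names kernel_alignment_loss and empirical_ntk, the Jacobian-Gram NTK proxy, and the loss weighting are hypothetical, not the paper's exact KAR or PEAK implementation (see the linked repository for that).

import torch

def kernel_alignment_loss(K: torch.Tensor, y: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Negative kernel-target alignment: -<K, yy^T>_F / (||K||_F ||yy^T||_F).
    A standard definition; the paper's exact KAR term may differ."""
    y = y.reshape(y.shape[0], -1)           # (n, d) targets
    Ky = y @ y.T                            # ideal kernel y y^T, shape (n, n)
    num = (K * Ky).sum()                    # Frobenius inner product <K, Ky>_F
    denom = K.norm() * Ky.norm() + eps      # product of Frobenius norms
    return -num / denom                     # minimizing this maximizes alignment

def empirical_ntk(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Gram matrix J J^T of per-sample parameter gradients, a common
    finite-width NTK proxy (illustrative, not the paper's construction)."""
    params = [p for p in model.parameters() if p.requires_grad]
    rows = []
    for xi in x:                            # x: (n, in_dim) batch of coordinates
        out = model(xi.unsqueeze(0)).sum()  # scalar output for this sample
        grads = torch.autograd.grad(out, params, create_graph=True)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    J = torch.stack(rows)                   # (n, n_params) per-sample Jacobian
    return J @ J.T                          # empirical NTK on the batch

# Hypothetical training step: data-fit loss plus a KAR-style alignment term,
# with a user-chosen regularization weight `lam` (expensive: create_graph=True
# above keeps K differentiable with respect to the model parameters).
#   K = empirical_ntk(model, x)
#   loss = torch.nn.functional.mse_loss(model(x), y) + lam * kernel_alignment_loss(K, y)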

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-li25br,
  title     = {Learning Input Encodings for Kernel-Optimal Implicit Neural Representations},
  author    = {Li, Zhemin and Ma, Liyuan and Wang, Hongxia and Zeng, Yaoyun and Han, Xiaolong},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {35620--35636},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/li25br/li25br.pdf},
  url       = {https://proceedings.mlr.press/v267/li25br.html},
  abstract  = {Implicit Neural Representations (INRs) rely heavily on architectural choices for good generalization. Developing theoretically grounded approaches for architecture design remains an active area of research. Via theoretical analysis of the infinite-width limit, we establish a methodology that characterizes an INR’s generalization by means of kernel alignment. We first formulate the optimal kernel that minimizes pointwise expected squared error, then demonstrate that the Neural Tangent Kernel of the composed function (INR with input encoding) can approximate any positive semidefinite dot-product kernel through input feature mapping adjustments. Building upon these insights, we propose a Kernel Alignment Regularizer (KAR) that naturally integrates with existing INR systems to enhance kernel alignment. We further develop Plug-in Encoding for Aligned Kernels (PEAK) to refine INR models with KAR using learnable input encoding. This work contributes to the ongoing research efforts in bridging theory and practice for principled INR architecture design. Code is available at https://github.com/lizhemin15/KAR.}
}
Endnote
%0 Conference Paper
%T Learning Input Encodings for Kernel-Optimal Implicit Neural Representations
%A Zhemin Li
%A Liyuan Ma
%A Hongxia Wang
%A Yaoyun Zeng
%A Xiaolong Han
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-li25br
%I PMLR
%P 35620--35636
%U https://proceedings.mlr.press/v267/li25br.html
%V 267
%X Implicit Neural Representations (INRs) rely heavily on architectural choices for good generalization. Developing theoretically grounded approaches for architecture design remains an active area of research. Via theoretical analysis of the infinite-width limit, we establish a methodology that characterizes an INR’s generalization by means of kernel alignment. We first formulate the optimal kernel that minimizes pointwise expected squared error, then demonstrate that the Neural Tangent Kernel of the composed function (INR with input encoding) can approximate any positive semidefinite dot-product kernel through input feature mapping adjustments. Building upon these insights, we propose a Kernel Alignment Regularizer (KAR) that naturally integrates with existing INR systems to enhance kernel alignment. We further develop Plug-in Encoding for Aligned Kernels (PEAK) to refine INR models with KAR using learnable input encoding. This work contributes to the ongoing research efforts in bridging theory and practice for principled INR architecture design. Code is available at https://github.com/lizhemin15/KAR.
APA
Li, Z., Ma, L., Wang, H., Zeng, Y. & Han, X. (2025). Learning Input Encodings for Kernel-Optimal Implicit Neural Representations. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:35620-35636. Available from https://proceedings.mlr.press/v267/li25br.html.
