Tensor Product Neural Networks for Functional ANOVA Model

Seokhun Park, Insung Kong, Yongchan Choi, Chanmoo Park, Yongdai Kim
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:48041-48085, 2025.

Abstract

Interpretability is becoming increasingly important as machine learning models grow more complex. The functional ANOVA model, which decomposes a high-dimensional function into a sum of lower-dimensional functions (commonly referred to as components), is one of the most popular tools for interpretable AI, and various neural networks have recently been developed to estimate each component of the functional ANOVA model. However, such neural networks are highly unstable when estimating components because the components themselves are not uniquely defined; that is, a given function admits multiple functional ANOVA decompositions. In this paper, we propose a novel neural network that guarantees a unique functional ANOVA decomposition and can therefore estimate each component stably. We call our proposed network the ANOVA Tensor Product Neural Network (ANOVA-TPNN), since it is motivated by the tensor product basis expansion. Theoretically, we prove that ANOVA-TPNN can approximate any smooth function well. Empirically, we show that ANOVA-TPNN provides much more stable estimates of each component, and thus much more stable interpretations, than existing neural networks when the training data and the initial values of the model parameters vary. Our source code is released at https://github.com/ParkSeokhun/ANOVA-TPNN.
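For readers unfamiliar with the model, the functional ANOVA decomposition referenced above writes a function of d inputs as a sum of a constant, main effects, pairwise interactions, and so on. The display below gives this decomposition together with the classical sum-to-zero (vanishing-integral) identifiability constraints that make it unique; this is the textbook formulation, and the paper's precise conditions may differ in detail.

```latex
% Functional ANOVA decomposition of f : [0,1]^d -> R, shown here up to
% second-order interactions (the general form ranges over all subsets S
% of the inputs).  Without constraints the components are not unique;
% the classical remedy is to require every non-constant component to
% integrate to zero in each of its arguments.
\[
  f(\mathbf{x}) = f_0 + \sum_{j=1}^{d} f_j(x_j)
                + \sum_{1 \le j < k \le d} f_{jk}(x_j, x_k) + \cdots,
  \qquad
  \int_0^1 f_S(\mathbf{x}_S)\, dx_j = 0 \quad \text{for all } j \in S.
\]
```

To make the tensor-product idea concrete, here is a minimal PyTorch sketch of a two-way model in this spirit: each feature is fed to a small subnetwork producing basis values that are centered over the batch (an empirical sum-to-zero constraint), and interaction components are built as tensor products of those bases, so the constraints hold by construction. The names (BasisNet, ANOVA2Way) and the batch-centering scheme are illustrative assumptions, not the authors' implementation; see https://github.com/ParkSeokhun/ANOVA-TPNN for the released code.

```python
# Illustrative sketch only: a two-way functional ANOVA model built from
# tensor products of per-feature basis subnetworks.  Centering each
# subnetwork's outputs over the batch enforces an empirical sum-to-zero
# constraint, which pins down a unique decomposition; in deployment the
# centering statistics would be frozen from the training data.
import torch
import torch.nn as nn


class BasisNet(nn.Module):
    """Maps one scalar feature to K basis values via a small MLP."""

    def __init__(self, n_basis: int = 8, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, n_basis)
        )

    def forward(self, x):              # x: (n, 1)
        b = self.net(x)                # (n, K)
        return b - b.mean(dim=0)       # empirical sum-to-zero over the batch


class ANOVA2Way(nn.Module):
    """f(x) = bias + sum_j f_j(x_j) + sum_{j<k} f_{jk}(x_j, x_k)."""

    def __init__(self, d: int, n_basis: int = 8):
        super().__init__()
        self.d = d
        self.bias = nn.Parameter(torch.zeros(1))
        self.basis = nn.ModuleList([BasisNet(n_basis) for _ in range(d)])
        self.w_main = nn.Parameter(0.01 * torch.randn(d, n_basis))
        self.pairs = [(j, k) for j in range(d) for k in range(j + 1, d)]
        self.w_pair = nn.Parameter(
            0.01 * torch.randn(len(self.pairs), n_basis, n_basis)
        )

    def forward(self, x):              # x: (n, d)
        B = [self.basis[j](x[:, j : j + 1]) for j in range(self.d)]
        out = self.bias + sum(
            (B[j] * self.w_main[j]).sum(-1) for j in range(self.d)
        )
        for p, (j, k) in enumerate(self.pairs):
            # tensor-product interaction:
            #   sum_{a,b} w[a, b] * psi_a(x_j) * psi_b(x_k)
            out = out + torch.einsum("na,ab,nb->n", B[j], self.w_pair[p], B[k])
        return out


x = torch.randn(128, 4)
model = ANOVA2Way(d=4)
print(model(x).shape)                  # torch.Size([128])
```

Because each centered basis sums to zero over the data, every main-effect and interaction term averages to zero as well, so the components cannot trade mass with one another and each fitted component can be plotted and interpreted on its own.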

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-park25d, title = {Tensor Product Neural Networks for Functional {ANOVA} Model}, author = {Park, Seokhun and Kong, Insung and Choi, Yongchan and Park, Chanmoo and Kim, Yongdai}, booktitle = {Proceedings of the 42nd International Conference on Machine Learning}, pages = {48041--48085}, year = {2025}, editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry}, volume = {267}, series = {Proceedings of Machine Learning Research}, month = {13--19 Jul}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/park25d/park25d.pdf}, url = {https://proceedings.mlr.press/v267/park25d.html}, abstract = {Interpretability is becoming increasingly important as machine learning models grow more complex. The functional ANOVA model, which decomposes a high-dimensional function into a sum of lower-dimensional functions (commonly referred to as components), is one of the most popular tools for interpretable AI, and various neural networks have recently been developed to estimate each component of the functional ANOVA model. However, such neural networks are highly unstable when estimating components because the components themselves are not uniquely defined; that is, a given function admits multiple functional ANOVA decompositions. In this paper, we propose a novel neural network that guarantees a unique functional ANOVA decomposition and can therefore estimate each component stably. We call our proposed network the ANOVA Tensor Product Neural Network (ANOVA-TPNN), since it is motivated by the tensor product basis expansion. Theoretically, we prove that ANOVA-TPNN can approximate any smooth function well. Empirically, we show that ANOVA-TPNN provides much more stable estimates of each component, and thus much more stable interpretations, than existing neural networks when the training data and the initial values of the model parameters vary. Our source code is released at https://github.com/ParkSeokhun/ANOVA-TPNN} }
Endnote
%0 Conference Paper %T Tensor Product Neural Networks for Functional ANOVA Model %A Seokhun Park %A Insung Kong %A Yongchan Choi %A Chanmoo Park %A Yongdai Kim %B Proceedings of the 42nd International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2025 %E Aarti Singh %E Maryam Fazel %E Daniel Hsu %E Simon Lacoste-Julien %E Felix Berkenkamp %E Tegan Maharaj %E Kiri Wagstaff %E Jerry Zhu %F pmlr-v267-park25d %I PMLR %P 48041--48085 %U https://proceedings.mlr.press/v267/park25d.html %V 267 %X Interpretability is becoming increasingly important as machine learning models grow more complex. The functional ANOVA model, which decomposes a high-dimensional function into a sum of lower-dimensional functions (commonly referred to as components), is one of the most popular tools for interpretable AI, and various neural networks have recently been developed to estimate each component of the functional ANOVA model. However, such neural networks are highly unstable when estimating components because the components themselves are not uniquely defined; that is, a given function admits multiple functional ANOVA decompositions. In this paper, we propose a novel neural network that guarantees a unique functional ANOVA decomposition and can therefore estimate each component stably. We call our proposed network the ANOVA Tensor Product Neural Network (ANOVA-TPNN), since it is motivated by the tensor product basis expansion. Theoretically, we prove that ANOVA-TPNN can approximate any smooth function well. Empirically, we show that ANOVA-TPNN provides much more stable estimates of each component, and thus much more stable interpretations, than existing neural networks when the training data and the initial values of the model parameters vary. Our source code is released at https://github.com/ParkSeokhun/ANOVA-TPNN
APA
Park, S., Kong, I., Choi, Y., Park, C. & Kim, Y. (2025). Tensor Product Neural Networks for Functional ANOVA Model. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:48041-48085. Available from https://proceedings.mlr.press/v267/park25d.html.
