On the Statistical Efficiency of Compositional Nonparametric Prediction

Yixi Xu, Jean Honorio, Xiao Wang
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1531-1539, 2018.

Abstract

In this paper, we propose a compositional nonparametric method in which a model is expressed as a labeled binary tree of $2k+1$ nodes, where each node is either a summation, a multiplication, or the application of one of the $q$ basis functions to one of the $p$ covariates. We show that in order to recover a labeled binary tree from a given dataset, the sufficient number of samples is $O(k\log(pq)+\log(k!))$, and the necessary number of samples is $\Omega(k\log(pq)-\log(k!))$. We further propose a greedy algorithm for regression in order to validate our theoretical findings through synthetic experiments.
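The compositional model in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a labeled binary tree whose internal nodes are summation or multiplication, and whose leaves apply one of $q$ basis functions to one of $p$ covariates. The particular basis set (identity, square, sine) and the class names are illustrative assumptions.

```python
import math

# Illustrative basis set (q = 3); the paper does not fix a particular choice.
BASIS = [lambda x: x, lambda x: x ** 2, math.sin]

class Leaf:
    """Leaf node: applies basis function index `b` to covariate index `j`."""
    def __init__(self, b, j):
        self.b, self.j = b, j
    def evaluate(self, x):
        return BASIS[self.b](x[self.j])

class Node:
    """Internal node: combines two subtrees by summation ('+') or multiplication ('*')."""
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def evaluate(self, x):
        l, r = self.left.evaluate(x), self.right.evaluate(x)
        return l + r if self.op == "+" else l * r

# A tree with k = 2 operator nodes and k + 1 = 3 leaves (2k + 1 = 5 nodes
# total), representing f(x) = x_0^2 + x_1 * sin(x_2).
tree = Node("+", Leaf(1, 0), Node("*", Leaf(0, 1), Leaf(2, 2)))
print(tree.evaluate([1.0, 2.0, 0.0]))  # 1.0^2 + 2.0 * sin(0) = 1.0
```

Counting nodes this way makes the sample-complexity bounds intuitive: each of the $2k+1$ node labels is drawn from a set of size roughly $pq$ (for leaves) or $2$ (for operators), so the log of the number of candidate trees scales like $k\log(pq)$ up to the $\log(k!)$ term for tree symmetries.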

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-xu18f,
  title     = {On the Statistical Efficiency of Compositional Nonparametric Prediction},
  author    = {Xu, Yixi and Honorio, Jean and Wang, Xiao},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {1531--1539},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/xu18f/xu18f.pdf},
  url       = {https://proceedings.mlr.press/v84/xu18f.html},
  abstract  = {In this paper, we propose a compositional nonparametric method in which a model is expressed as a labeled binary tree of $2k+1$ nodes, where each node is either a summation, a multiplication, or the application of one of the $q$ basis functions to one of the $p$ covariates. We show that in order to recover a labeled binary tree from a given dataset, the sufficient number of samples is $O(k\log(pq)+\log(k!))$, and the necessary number of samples is $\Omega(k\log(pq)-\log(k!))$. We further propose a greedy algorithm for regression in order to validate our theoretical findings through synthetic experiments.}
}
Endnote
%0 Conference Paper
%T On the Statistical Efficiency of Compositional Nonparametric Prediction
%A Yixi Xu
%A Jean Honorio
%A Xiao Wang
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-xu18f
%I PMLR
%P 1531--1539
%U https://proceedings.mlr.press/v84/xu18f.html
%V 84
%X In this paper, we propose a compositional nonparametric method in which a model is expressed as a labeled binary tree of $2k+1$ nodes, where each node is either a summation, a multiplication, or the application of one of the $q$ basis functions to one of the $p$ covariates. We show that in order to recover a labeled binary tree from a given dataset, the sufficient number of samples is $O(k\log(pq)+\log(k!))$, and the necessary number of samples is $\Omega(k\log(pq)-\log(k!))$. We further propose a greedy algorithm for regression in order to validate our theoretical findings through synthetic experiments.
APA
Xu, Y., Honorio, J. & Wang, X. (2018). On the Statistical Efficiency of Compositional Nonparametric Prediction. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:1531-1539. Available from https://proceedings.mlr.press/v84/xu18f.html.