Adaptively Partitioning Max-Affine Estimators for Convex Regression

Gábor Balázs
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:860-874, 2022.

Abstract

This paper considers convex shape-restricted nonparametric regression with subgaussian domain and noise under the squared loss. It introduces a tractable convex piecewise-linear estimator which precomputes a partition of the training data by an adaptive version of farthest-point clustering, approximately fits hyperplanes over the partition cells by minimizing the regularized empirical risk, and projects the result into the max-affine class. The analysis provides an upper bound on the generalization error of this estimator that matches the rate of Lipschitz nonparametric regression, and proves its adaptivity to the intrinsic dimension of the data, mitigating the curse of dimensionality. The experiments show competitive performance, improved robustness to overfitting, and significant computational savings compared to existing convex regression methods.
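For intuition, the three-step pipeline described in the abstract can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the paper's algorithm: it uses plain (non-adaptive) farthest-point clustering, per-cell ridge regression as a stand-in for the regularized empirical risk minimization, and a pointwise maximum over the fitted hyperplanes in place of the paper's projection step. The function names and the regularization parameter lam are hypothetical.

import numpy as np

def farthest_point_centers(X, k, rng=None):
    # Greedy farthest-point clustering: pick k centers from the rows of X.
    rng = np.random.default_rng(rng)
    centers = [int(rng.integers(X.shape[0]))]
    dists = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))  # point farthest from all current centers
        centers.append(idx)
        dists = np.minimum(dists, np.linalg.norm(X - X[idx], axis=1))
    return X[centers]

def fit_max_affine(X, y, k, lam=1e-3, rng=None):
    # Partition by nearest center, then fit one ridge-regularized
    # hyperplane per cell; the coefficients define a max-affine function.
    centers = farthest_point_centers(X, k, rng)
    labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
    d = X.shape[1]
    coefs = []
    for j in range(k):
        mask = labels == j
        if not mask.any():
            continue  # skip empty cells
        A = np.hstack([X[mask], np.ones((mask.sum(), 1))])  # affine design matrix
        # ridge-regularized least squares: (A^T A + lam I) w = A^T y
        w = np.linalg.solve(A.T @ A + lam * np.eye(d + 1), A.T @ y[mask])
        coefs.append(w)
    return np.array(coefs)

def predict_max_affine(coefs, X):
    # Evaluate the convex piecewise-linear (max-affine) estimate at the rows of X.
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    return (A @ coefs.T).max(axis=1)

Taking the maximum of the per-cell hyperplanes guarantees a convex output by construction; in this simplified form, predict_max_affine(fit_max_affine(X, y, k=16), X_test) yields a convex piecewise-linear fit with at most 16 pieces.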

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-balazs22a,
  title     = {Adaptively Partitioning Max-Affine Estimators for Convex Regression},
  author    = {Bal\'azs, G\'abor},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {860--874},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/balazs22a/balazs22a.pdf},
  url       = {https://proceedings.mlr.press/v151/balazs22a.html},
  abstract  = {This paper considers convex shape-restricted nonparametric regression over subgaussian domain and noise with the squared loss. It introduces a tractable convex piecewise-linear estimator which precomputes a partition of the training data by an adaptive version of farthest-point clustering, approximately fits hyperplanes over the partition cells by minimizing the regularized empirical risk, and projects the result into the max-affine class. The analysis provides an upper bound on the generalization error of this estimator matching the rate of Lipschitz nonparametric regression and proves its adaptivity to the intrinsic dimension of the data mitigating the effect of the curse of dimensionality. The experiments conclude with competitive performance, improved overfitting robustness, and significant computational savings compared to existing convex regression methods.}
}
Endnote
%0 Conference Paper
%T Adaptively Partitioning Max-Affine Estimators for Convex Regression
%A Gábor Balázs
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-balazs22a
%I PMLR
%P 860--874
%U https://proceedings.mlr.press/v151/balazs22a.html
%V 151
%X This paper considers convex shape-restricted nonparametric regression over subgaussian domain and noise with the squared loss. It introduces a tractable convex piecewise-linear estimator which precomputes a partition of the training data by an adaptive version of farthest-point clustering, approximately fits hyperplanes over the partition cells by minimizing the regularized empirical risk, and projects the result into the max-affine class. The analysis provides an upper bound on the generalization error of this estimator matching the rate of Lipschitz nonparametric regression and proves its adaptivity to the intrinsic dimension of the data mitigating the effect of the curse of dimensionality. The experiments conclude with competitive performance, improved overfitting robustness, and significant computational savings compared to existing convex regression methods.
APA
Balázs, G. (2022). Adaptively Partitioning Max-Affine Estimators for Convex Regression. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:860-874. Available from https://proceedings.mlr.press/v151/balazs22a.html.