Single Point Transductive Prediction

Nilesh Tripuraneni, Lester Mackey
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9593-9602, 2020.

Abstract

Standard methods in supervised learning separate training and prediction: the model is fit independently of any test points it may encounter. However, can knowledge of the next test point $\mathbf{x}_{\star}$ be exploited to improve prediction accuracy? We address this question in the context of linear prediction, showing how techniques from semi-parametric inference can be used transductively to combat regularization bias. We first lower bound the $\mathbf{x}_{\star}$ prediction error of ridge regression and the Lasso, showing that they must incur significant bias in certain test directions. We then provide non-asymptotic upper bounds on the $\mathbf{x}_{\star}$ prediction error of two transductive prediction rules. We conclude by showing the efficacy of our methods on both synthetic and real data, highlighting the improvements single point transductive prediction can provide in settings with distribution shift.
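To make the idea concrete, the sketch below illustrates the general flavor of a one-step, direction-specific bias correction for a ridge fit at a known test point $\mathbf{x}_{\star}$. This is not the paper's exact estimator or guarantee: the function names are hypothetical, the correction uses the empirical precision matrix and assumes a well-conditioned design (where it simply recovers the OLS prediction), whereas the paper's constructions target regimes where that shortcut is unavailable.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Standard ridge regression estimate."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def transductive_predict(X, y, x_star, lam):
    """Illustrative one-step correction of the plug-in prediction
    x_star' beta_hat in the direction of the known test point x_star.
    (Sketch only; not the estimator analyzed in the paper.)"""
    n, d = X.shape
    beta_hat = ridge_fit(X, y, lam)
    Sigma_hat = X.T @ X / n
    # Correction direction ~ Sigma_hat^{-1} x_star; requires a
    # well-conditioned empirical covariance, in which case the
    # corrected prediction coincides with the OLS prediction.
    w = np.linalg.solve(Sigma_hat, x_star)
    residuals = y - X @ beta_hat
    return x_star @ beta_hat + (X @ w) @ residuals / n

# Toy usage: heavy regularization biases the plug-in ridge prediction;
# the corrected prediction removes that bias in the x_star direction.
rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
y = X @ beta + 0.1 * rng.normal(size=n)
x_star = rng.normal(size=d)
print(x_star @ beta,                        # true signal at x_star
      x_star @ ridge_fit(X, y, lam=50.0),   # biased plug-in prediction
      transductive_predict(X, y, x_star, lam=50.0))
```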

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-tripuraneni20a,
  title     = {Single Point Transductive Prediction},
  author    = {Tripuraneni, Nilesh and Mackey, Lester},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9593--9602},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/tripuraneni20a/tripuraneni20a.pdf},
  url       = {https://proceedings.mlr.press/v119/tripuraneni20a.html},
  abstract  = {Standard methods in supervised learning separate training and prediction: the model is fit independently of any test points it may encounter. However, can knowledge of the next test point $\mathbf{x}_{\star}$ be exploited to improve prediction accuracy? We address this question in the context of linear prediction, showing how techniques from semi-parametric inference can be used transductively to combat regularization bias. We first lower bound the $\mathbf{x}_{\star}$ prediction error of ridge regression and the Lasso, showing that they must incur significant bias in certain test directions. We then provide non-asymptotic upper bounds on the $\mathbf{x}_{\star}$ prediction error of two transductive prediction rules. We conclude by showing the efficacy of our methods on both synthetic and real data, highlighting the improvements single point transductive prediction can provide in settings with distribution shift.}
}
Endnote
%0 Conference Paper
%T Single Point Transductive Prediction
%A Nilesh Tripuraneni
%A Lester Mackey
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-tripuraneni20a
%I PMLR
%P 9593--9602
%U https://proceedings.mlr.press/v119/tripuraneni20a.html
%V 119
%X Standard methods in supervised learning separate training and prediction: the model is fit independently of any test points it may encounter. However, can knowledge of the next test point $\mathbf{x}_{\star}$ be exploited to improve prediction accuracy? We address this question in the context of linear prediction, showing how techniques from semi-parametric inference can be used transductively to combat regularization bias. We first lower bound the $\mathbf{x}_{\star}$ prediction error of ridge regression and the Lasso, showing that they must incur significant bias in certain test directions. We then provide non-asymptotic upper bounds on the $\mathbf{x}_{\star}$ prediction error of two transductive prediction rules. We conclude by showing the efficacy of our methods on both synthetic and real data, highlighting the improvements single point transductive prediction can provide in settings with distribution shift.
APA
Tripuraneni, N. & Mackey, L. (2020). Single Point Transductive Prediction. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9593-9602. Available from https://proceedings.mlr.press/v119/tripuraneni20a.html.