Minimax Instrumental Variable Regression and $L_2$ Convergence Guarantees without Identification or Closedness

Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:2291-2318, 2023.

Abstract

In this paper, we study nonparametric estimation of instrumental variable (IV) regressions. Recently, many flexible machine learning methods have been developed for instrumental variable estimation. However, these methods have at least one of the following limitations: (1) restricting the IV regression to be uniquely identified; (2) only obtaining estimation error rates in terms of weak metrics (e.g., projected norm) rather than strong metrics (e.g., $L_2$ norm); or (3) imposing the so-called closedness condition that requires a certain conditional expectation operator to be sufficiently smooth. In this paper, we present the first method and analysis that can avoid all three limitations, while still permitting general function approximation. Specifically, we propose a new penalized minimax estimator that can converge to a fixed IV solution even when there are multiple solutions, and we derive a strong $L_2$ error rate for our estimator under lax conditions. Notably, this guarantee only needs a widely used source condition and realizability assumptions, but not the so-called closedness condition. We argue that the source condition and the closedness condition are inherently conflicting, so relaxing the latter significantly improves upon the existing literature that requires both conditions. Our estimator can achieve this improvement because it builds on a novel formulation of the IV estimation problem as a constrained optimization problem.
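
To make the minimax formulation concrete, the sketch below shows the general shape of a penalized minimax IV estimator in the simplest special case: both the IV regression h and the adversarial test function f live in linear sieve classes, and both players carry a ridge penalty, so the inner maximization has a closed form and the outer minimization reduces to a generalized ridge regression. The basis choice, penalty weights, and toy data are illustrative assumptions for exposition, not the paper's exact estimator or its guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: X is endogenous (shares the confounder U with y), Z is an instrument.
n = 2000
Z = rng.normal(size=n)
U = rng.normal(size=n)                       # unobserved confounder
X = Z + U + 0.1 * rng.normal(size=n)
y = 2.0 * X + U + 0.1 * rng.normal(size=n)   # true structural slope: 2.0

def features(v):
    """Illustrative sieve basis: intercept plus the raw variable (assumption)."""
    return np.column_stack([np.ones(len(v)), v])

def minimax_iv_sketch(X, Z, y, lam_f=0.1, lam_h=0.01):
    """Penalized minimax IV with linear sieves (illustrative sketch, not the
    paper's estimator).

    Saddle point:  min_theta max_beta
        E_n[(y - Phi theta) (Psi beta)] - lam_f ||beta||^2 + lam_h ||theta||^2.
    With A = E_n[Psi^T Phi] and b = E_n[Psi^T y], the inner maximum is
    beta* = (b - A theta) / (2 lam_f); plugging it back in leaves
        min_theta (1 / (4 lam_f)) ||b - A theta||^2 + lam_h ||theta||^2,
    a generalized ridge problem with closed-form solution.
    """
    Phi, Psi = features(X), features(Z)
    n = len(y)
    A = Psi.T @ Phi / n
    b = Psi.T @ y / n
    theta = np.linalg.solve(A.T @ A + 4 * lam_f * lam_h * np.eye(Phi.shape[1]),
                            A.T @ b)
    return theta

theta = minimax_iv_sketch(X, Z, y)
print(theta)  # slope estimate should be close to 2.0
```

As the penalties shrink to zero, this closed form recovers the usual two-stage least squares solution in the just-identified linear case; the penalty on h is what selects a fixed (minimum-norm) solution when the IV regression is not uniquely identified, echoing the role the abstract attributes to penalization.
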

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-bennett23b,
  title     = {Minimax Instrumental Variable Regression and $L_2$ Convergence Guarantees without Identification or Closedness},
  author    = {Bennett, Andrew and Kallus, Nathan and Mao, Xiaojie and Newey, Whitney and Syrgkanis, Vasilis and Uehara, Masatoshi},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {2291--2318},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/bennett23b/bennett23b.pdf},
  url       = {https://proceedings.mlr.press/v195/bennett23b.html},
  abstract  = {In this paper, we study nonparametric estimation of instrumental variable (IV) regressions. Recently, many flexible machine learning methods have been developed for instrumental variable estimation. However, these methods have at least one of the following limitations: (1) restricting the IV regression to be uniquely identified; (2) only obtaining estimation error rates in terms of weak metrics (e.g., projected norm) rather than strong metrics (e.g., $L_2$ norm); or (3) imposing the so-called closedness condition that requires a certain conditional expectation operator to be sufficiently smooth. In this paper, we present the first method and analysis that can avoid all three limitations, while still permitting general function approximation. Specifically, we propose a new penalized minimax estimator that can converge to a fixed IV solution even when there are multiple solutions, and we derive a strong $L_2$ error rate for our estimator under lax conditions. Notably, this guarantee only needs a widely used source condition and realizability assumptions, but not the so-called closedness condition. We argue that the source condition and the closedness condition are inherently conflicting, so relaxing the latter significantly improves upon the existing literature that requires both conditions. Our estimator can achieve this improvement because it builds on a novel formulation of the IV estimation problem as a constrained optimization problem.}
}
Endnote
%0 Conference Paper
%T Minimax Instrumental Variable Regression and $L_2$ Convergence Guarantees without Identification or Closedness
%A Andrew Bennett
%A Nathan Kallus
%A Xiaojie Mao
%A Whitney Newey
%A Vasilis Syrgkanis
%A Masatoshi Uehara
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-bennett23b
%I PMLR
%P 2291--2318
%U https://proceedings.mlr.press/v195/bennett23b.html
%V 195
%X In this paper, we study nonparametric estimation of instrumental variable (IV) regressions. Recently, many flexible machine learning methods have been developed for instrumental variable estimation. However, these methods have at least one of the following limitations: (1) restricting the IV regression to be uniquely identified; (2) only obtaining estimation error rates in terms of weak metrics (e.g., projected norm) rather than strong metrics (e.g., $L_2$ norm); or (3) imposing the so-called closedness condition that requires a certain conditional expectation operator to be sufficiently smooth. In this paper, we present the first method and analysis that can avoid all three limitations, while still permitting general function approximation. Specifically, we propose a new penalized minimax estimator that can converge to a fixed IV solution even when there are multiple solutions, and we derive a strong $L_2$ error rate for our estimator under lax conditions. Notably, this guarantee only needs a widely used source condition and realizability assumptions, but not the so-called closedness condition. We argue that the source condition and the closedness condition are inherently conflicting, so relaxing the latter significantly improves upon the existing literature that requires both conditions. Our estimator can achieve this improvement because it builds on a novel formulation of the IV estimation problem as a constrained optimization problem.
APA
Bennett, A., Kallus, N., Mao, X., Newey, W., Syrgkanis, V. & Uehara, M. (2023). Minimax Instrumental Variable Regression and $L_2$ Convergence Guarantees without Identification or Closedness. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:2291-2318. Available from https://proceedings.mlr.press/v195/bennett23b.html.
