Smaller Confidence Intervals From IPW Estimators via Data-Dependent Coarsening (Extended Abstract)

Alkis Kalavasis, Anay Mehrotra, Manolis Zampetakis
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2767-2767, 2024.

Abstract

Inverse propensity-score weighted (IPW) estimators are prevalent in causal inference for estimating average treatment effects in observational studies. Under unconfoundedness, given accurate propensity scores and $n$ samples, the size of confidence intervals of IPW estimators scales down with $n$, and several of their variants improve the rate of scaling. However, neither IPW estimators nor their variants are robust to inaccuracies: even if a single covariate has an $\epsilon>0$ additive error in the propensity score, the size of confidence intervals of these estimators can increase arbitrarily. Moreover, even without errors, the rate with which the confidence intervals of these estimators go to zero with $n$ can be arbitrarily slow in the presence of extreme propensity scores (those close to 0 or 1). We introduce a family of Coarse IPW (CIPW) estimators that captures existing IPW estimators and their variants. Each CIPW estimator is an IPW estimator on a coarsened covariate space, where certain covariates are merged. Under mild assumptions, e.g., Lipschitzness in expected outcomes and sparsity of extreme propensity scores, we give an efficient algorithm to find a robust estimator: given $\epsilon$-inaccurate propensity scores and $n$ samples, its confidence interval size scales with $\epsilon+(1/\sqrt{n})$. In contrast, under the same assumptions, existing estimators’ confidence interval sizes are $\Omega(1)$ irrespective of $\epsilon$ and $n$. Crucially, our estimator is data-dependent and we show that no data-independent CIPW estimator can be robust to inaccuracies.
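The abstract describes a CIPW estimator as an ordinary IPW estimator run on a coarsened covariate space in which certain covariates are merged. A minimal sketch of that idea is below; this is illustrative only, not the paper's algorithm: the grouping map and the pooled-propensity rule here are assumptions made for the example, whereas the paper's contribution is choosing the coarsening in a data-dependent way.

```python
import numpy as np

def ipw_ate(y, t, e):
    """Standard IPW estimate of the average treatment effect:
    mean of t*y/e - (1-t)*y/(1-e) over the sample."""
    return np.mean(t * y / e - (1 - t) * y / (1 - e))

def coarse_ipw_ate(y, t, x, groups):
    """IPW on a coarsened covariate space: covariates mapped to the
    same group (via the dict `groups`) share a pooled empirical
    propensity score, here the treated fraction within the group."""
    g = np.array([groups[xi] for xi in x])
    e_pooled = np.empty(len(y), dtype=float)
    for gv in np.unique(g):
        mask = g == gv
        e_pooled[mask] = t[mask].mean()  # pooled propensity in this group
    return ipw_ate(y, t, e_pooled)

# Example: two covariate values merged into one group.
y = np.array([1.0, 0.0, 1.0, 0.0])
t = np.array([1, 0, 1, 0])
x = np.array([0, 0, 1, 1])
tau_hat = coarse_ipw_ate(y, t, x, groups={0: 0, 1: 0})
```

Merging covariates trades a small amount of bias (outcomes within a merged group are treated as comparable, which is where the Lipschitzness assumption enters) for propensity scores that are bounded away from 0 and 1, which is what keeps the confidence interval small.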

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-kalavasis24a,
  title     = {Smaller Confidence Intervals From IPW Estimators via Data-Dependent Coarsening (Extended Abstract)},
  author    = {Kalavasis, Alkis and Mehrotra, Anay and Zampetakis, Manolis},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {2767--2767},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/kalavasis24a/kalavasis24a.pdf},
  url       = {https://proceedings.mlr.press/v247/kalavasis24a.html},
  abstract  = {Inverse propensity-score weighted (IPW) estimators are prevalent in causal inference for estimating average treatment effects in observational studies. Under unconfoundedness, given accurate propensity scores and $n$ samples, the size of confidence intervals of IPW estimators scales down with $n$, and several of their variants improve the rate of scaling. However, neither IPW estimators nor their variants are robust to inaccuracies: even if a single covariate has an $\epsilon>0$ additive error in the propensity score, the size of confidence intervals of these estimators can increase arbitrarily. Moreover, even without errors, the rate with which the confidence intervals of these estimators go to zero with $n$ can be arbitrarily slow in the presence of extreme propensity scores (those close to 0 or 1). We introduce a family of Coarse IPW (CIPW) estimators that captures existing IPW estimators and their variants. Each CIPW estimator is an IPW estimator on a coarsened covariate space, where certain covariates are merged. Under mild assumptions, e.g., Lipschitzness in expected outcomes and sparsity of extreme propensity scores, we give an efficient algorithm to find a robust estimator: given $\epsilon$-inaccurate propensity scores and $n$ samples, its confidence interval size scales with $\epsilon+(1/\sqrt{n})$. In contrast, under the same assumptions, existing estimators’ confidence interval sizes are $\Omega(1)$ irrespective of $\epsilon$ and $n$. Crucially, our estimator is data-dependent and we show that no data-independent CIPW estimator can be robust to inaccuracies.}
}
Endnote
%0 Conference Paper
%T Smaller Confidence Intervals From IPW Estimators via Data-Dependent Coarsening (Extended Abstract)
%A Alkis Kalavasis
%A Anay Mehrotra
%A Manolis Zampetakis
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-kalavasis24a
%I PMLR
%P 2767--2767
%U https://proceedings.mlr.press/v247/kalavasis24a.html
%V 247
%X Inverse propensity-score weighted (IPW) estimators are prevalent in causal inference for estimating average treatment effects in observational studies. Under unconfoundedness, given accurate propensity scores and $n$ samples, the size of confidence intervals of IPW estimators scales down with $n$, and several of their variants improve the rate of scaling. However, neither IPW estimators nor their variants are robust to inaccuracies: even if a single covariate has an $\epsilon>0$ additive error in the propensity score, the size of confidence intervals of these estimators can increase arbitrarily. Moreover, even without errors, the rate with which the confidence intervals of these estimators go to zero with $n$ can be arbitrarily slow in the presence of extreme propensity scores (those close to 0 or 1). We introduce a family of Coarse IPW (CIPW) estimators that captures existing IPW estimators and their variants. Each CIPW estimator is an IPW estimator on a coarsened covariate space, where certain covariates are merged. Under mild assumptions, e.g., Lipschitzness in expected outcomes and sparsity of extreme propensity scores, we give an efficient algorithm to find a robust estimator: given $\epsilon$-inaccurate propensity scores and $n$ samples, its confidence interval size scales with $\epsilon+(1/\sqrt{n})$. In contrast, under the same assumptions, existing estimators’ confidence interval sizes are $\Omega(1)$ irrespective of $\epsilon$ and $n$. Crucially, our estimator is data-dependent and we show that no data-independent CIPW estimator can be robust to inaccuracies.
APA
Kalavasis, A., Mehrotra, A. & Zampetakis, M. (2024). Smaller Confidence Intervals From IPW Estimators via Data-Dependent Coarsening (Extended Abstract). Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:2767-2767. Available from https://proceedings.mlr.press/v247/kalavasis24a.html.
