Permutation Weighting

David Arbour, Drew Dimmery, Arjun Sondhi
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:331-341, 2021.

Abstract

A commonly applied approach for estimating causal effects from observational data is to apply weights which render treatments independent of observed pre-treatment covariates. Recently, emphasis has been placed on deriving balancing weights which explicitly target this independence condition. In this work we introduce permutation weighting, a method for estimating balancing weights using a standard binary classifier (regardless of the cardinality of the treatment). A large class of probabilistic classifiers may be used in this method; the choice of loss for the classifier implies the particular definition of balance. We bound bias and variance in terms of the excess risk of the classifier, show that these vanish asymptotically, and demonstrate that our classification problem directly minimizes imbalance. Additionally, hyper-parameter tuning and model selection can be performed with standard cross-validation methods. Empirical evaluations indicate that permutation weighting provides favorable performance in comparison to existing methods.
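
The abstract only sketches the procedure, so below is a minimal, illustrative Python sketch of the idea as described there, not code from the paper: stack the observed (treatment, covariate) pairs with a copy in which treatment has been permuted (so treatment is independent of covariates by construction), train a probabilistic classifier to tell the two apart, and use its odds as balancing weights. The function name permutation_weights, the use of scikit-learn's LogisticRegression, and the single-permutation design are our own assumptions; any probabilistic classifier could stand in, and (per the abstract) the classifier's loss determines the notion of balance being targeted.

import numpy as np
from sklearn.linear_model import LogisticRegression

def permutation_weights(A, X, rng=None):
    """Illustrative sketch (not the authors' code) of permutation weighting.

    Returns one balancing weight per unit, estimated as the classifier's
    odds of "permuted" vs. "observed" for that unit's (treatment, covariates).
    """
    rng = np.random.default_rng(rng)
    A = np.asarray(A, dtype=float).reshape(-1, 1)
    X = np.asarray(X, dtype=float)
    if X.ndim == 1:
        X = X.reshape(-1, 1)

    # Observed pairs (label 0) stacked with pairs whose treatment column
    # is randomly permuted (label 1); permutation breaks any dependence
    # between treatment and covariates by construction.
    A_perm = rng.permutation(A)
    Z = np.vstack([np.hstack([A, X]), np.hstack([A_perm, X])])
    y = np.concatenate([np.zeros(len(A)), np.ones(len(A))])

    # Any probabilistic classifier could be used here; logistic regression
    # is one convenient choice.  A single permutation is used for brevity;
    # averaging over several permutations would reduce variance.
    clf = LogisticRegression(max_iter=1000).fit(Z, y)
    p = clf.predict_proba(np.hstack([A, X]))[:, 1]

    # Odds of "permuted" vs. "observed" estimate the density ratio
    # p(a)p(x) / p(a, x), i.e. the balancing weight for each unit.
    return p / (1.0 - p)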

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-arbour21a,
  title     = {Permutation Weighting},
  author    = {Arbour, David and Dimmery, Drew and Sondhi, Arjun},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {331--341},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/arbour21a/arbour21a.pdf},
  url       = {https://proceedings.mlr.press/v139/arbour21a.html}
}
Endnote
%0 Conference Paper
%T Permutation Weighting
%A David Arbour
%A Drew Dimmery
%A Arjun Sondhi
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-arbour21a
%I PMLR
%P 331--341
%U https://proceedings.mlr.press/v139/arbour21a.html
%V 139
APA
Arbour, D., Dimmery, D., & Sondhi, A. (2021). Permutation Weighting. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:331-341. Available from https://proceedings.mlr.press/v139/arbour21a.html.