Invariant Deep Uplift Modeling for Incentive Assignment in Online Marketing via Probability of Necessity and Sufficiency

Zexu Sun, Qiyu Han, Hao Yang, Anpeng Wu, Minqin Zhu, Dugang Liu, Chen Ma, Yunpeng Weng, Xing Tang, Xiuqiang He
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:57450-57468, 2025.

Abstract

In online platforms, incentives (e.g., discounts, coupons) are used to boost user engagement and revenue. Uplift modeling methods are developed to estimate user responses from observational data, often incorporating distribution balancing to address selection bias. However, these methods are limited to in-distribution testing data, which mirrors the training data distribution. In reality, user features change continuously due to time, geography, and other factors, especially on complex online marketing platforms. Thus, an effective uplift modeling method for out-of-distribution data is crucial. To address this, we propose a novel uplift modeling method, Invariant Deep Uplift Modeling (IDUM), which uses invariant learning to enhance out-of-distribution generalization by identifying causal factors that remain consistent across domains. IDUM further refines these features into necessary and sufficient factors and employs a masking component to reduce computational costs by selecting the most informative invariant features. A balancing discrepancy component is also introduced to mitigate selection bias in observational data. We conduct extensive experiments on public and real-world datasets to demonstrate IDUM’s effectiveness in both in-distribution and out-of-distribution scenarios in online marketing. Furthermore, we provide theoretical analysis and proofs to support IDUM’s generalizability.
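To make the underlying estimation problem concrete: uplift modeling estimates, per user, the difference between the expected response with and without an incentive. The sketch below is a minimal generic two-model ("T-learner") baseline on synthetic data, not the paper's IDUM — it includes none of IDUM's invariant learning, necessity/sufficiency refinement, masking, or balancing components. All data and variable names here are illustrative assumptions.

```python
# Minimal two-model ("T-learner") uplift sketch on synthetic data.
# NOT the paper's IDUM: it illustrates only the basic uplift-estimation
# setup that IDUM builds on (separate treated/control response models,
# uplift = difference of their predictions).
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))                 # user features
t = rng.integers(0, 2, size=n)              # incentive assignment (0/1)
# Synthetic outcome: the incentive adds a feature-dependent effect
# of 1.0 + 0.5 * X[:, 0] on top of a linear baseline response.
y = X @ (0.3 * np.ones(d)) + t * (1.0 + 0.5 * X[:, 0]) \
    + rng.normal(scale=0.1, size=n)

def fit_linear(X, y):
    # Least-squares fit with an intercept column.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

w1 = fit_linear(X[t == 1], y[t == 1])       # treated-response model
w0 = fit_linear(X[t == 0], y[t == 0])       # control-response model

Xb = np.hstack([X, np.ones((n, 1))])
uplift = Xb @ w1 - Xb @ w0                  # estimated per-user uplift
print(float(uplift.mean()))                 # close to the true average effect
```

Under this synthetic design the average true effect is about 1.0, which the two-model estimate recovers; the paper's point is that such baselines can degrade when the test distribution shifts away from the training one.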

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-sun25f,
  title = {Invariant Deep Uplift Modeling for Incentive Assignment in Online Marketing via Probability of Necessity and Sufficiency},
  author = {Sun, Zexu and Han, Qiyu and Yang, Hao and Wu, Anpeng and Zhu, Minqin and Liu, Dugang and Ma, Chen and Weng, Yunpeng and Tang, Xing and He, Xiuqiang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {57450--57468},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/sun25f/sun25f.pdf},
  url = {https://proceedings.mlr.press/v267/sun25f.html},
  abstract = {In online platforms, incentives (e.g., discounts, coupons) are used to boost user engagement and revenue. Uplift modeling methods are developed to estimate user responses from observational data, often incorporating distribution balancing to address selection bias. However, these methods are limited by in-distribution testing data, which mirrors the training data distribution. In reality, user features change continuously due to time, geography, and other factors, especially on complex online marketing platforms. Thus, effective uplift modeling method for out-of-distribution data is crucial. To address this, we propose a novel uplift modeling method Invariant Deep Uplift Modeling, namely IDUM, which uses invariant learning to enhance out-of-distribution generalization by identifying causal factors that remain consistent across domains. IDUM further refines these features into necessary and sufficient factors and employs a masking component to reduce computational costs by selecting the most informative invariant features. A balancing discrepancy component is also introduced to mitigate selection bias in observational data. We conduct extensive experiments on public and real-world datasets to demonstrate IDUM’s effectiveness in both in-distribution and out-of-distribution scenarios in online marketing. Furthermore, we also provide theoretical analysis and related proofs to support our IDUM’s generalizability.}
}
Endnote
%0 Conference Paper
%T Invariant Deep Uplift Modeling for Incentive Assignment in Online Marketing via Probability of Necessity and Sufficiency
%A Zexu Sun
%A Qiyu Han
%A Hao Yang
%A Anpeng Wu
%A Minqin Zhu
%A Dugang Liu
%A Chen Ma
%A Yunpeng Weng
%A Xing Tang
%A Xiuqiang He
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-sun25f
%I PMLR
%P 57450--57468
%U https://proceedings.mlr.press/v267/sun25f.html
%V 267
%X In online platforms, incentives (e.g., discounts, coupons) are used to boost user engagement and revenue. Uplift modeling methods are developed to estimate user responses from observational data, often incorporating distribution balancing to address selection bias. However, these methods are limited by in-distribution testing data, which mirrors the training data distribution. In reality, user features change continuously due to time, geography, and other factors, especially on complex online marketing platforms. Thus, effective uplift modeling method for out-of-distribution data is crucial. To address this, we propose a novel uplift modeling method Invariant Deep Uplift Modeling, namely IDUM, which uses invariant learning to enhance out-of-distribution generalization by identifying causal factors that remain consistent across domains. IDUM further refines these features into necessary and sufficient factors and employs a masking component to reduce computational costs by selecting the most informative invariant features. A balancing discrepancy component is also introduced to mitigate selection bias in observational data. We conduct extensive experiments on public and real-world datasets to demonstrate IDUM’s effectiveness in both in-distribution and out-of-distribution scenarios in online marketing. Furthermore, we also provide theoretical analysis and related proofs to support our IDUM’s generalizability.
APA
Sun, Z., Han, Q., Yang, H., Wu, A., Zhu, M., Liu, D., Ma, C., Weng, Y., Tang, X. & He, X. (2025). Invariant Deep Uplift Modeling for Incentive Assignment in Online Marketing via Probability of Necessity and Sufficiency. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:57450-57468. Available from https://proceedings.mlr.press/v267/sun25f.html.
