Modeling Uplift from Observational Time-Series in Continual Scenarios

Sanghyun Kim, Jungwon Choi, NamHee Kim, Jaesung Ryu, Juho Lee
Proceedings of The First AAAI Bridge Program on Continual Causality, PMLR 208:75-84, 2023.

Abstract

As the importance of causality in machine learning grows, we expect models to learn the correct causal mechanisms so that they remain robust even under distribution shifts. Since most prior benchmarks focus on vision and language tasks, domain and temporal shifts in causal inference tasks have not been well explored. To this end, we introduce the Backend-TS dataset for modeling uplift in continual learning scenarios. We build the dataset from CRUD data and propose continual learning tasks under temporal and domain shifts.
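
For readers unfamiliar with uplift modeling, the sketch below illustrates the general idea with a generic two-model ("T-learner") estimator on synthetic data. It is only an illustration of the concept under assumed synthetic features and outcomes; it is not the method, features, or data used in the paper.

# A minimal, generic uplift-modeling sketch (two-model / "T-learner" approach).
# NOT the paper's method or dataset; the synthetic data below is purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic observational log: user features X, binary treatment t, outcome y.
n = 5000
X = rng.normal(size=(n, 4))                      # assumed aggregated user features
t = rng.binomial(1, 0.5, size=n)                 # treatment assignment
true_uplift = 0.5 * X[:, 0]                      # treatment effect depends on the first feature
y = X[:, 1] + t * true_uplift + rng.normal(scale=0.1, size=n)

# Two-model estimator: fit one outcome model per treatment arm.
model_treated = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
model_control = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])

# Estimated uplift = predicted outcome under treatment minus under control.
uplift_hat = model_treated.predict(X) - model_control.predict(X)
print("mean abs. error of uplift estimate:", np.abs(uplift_hat - true_uplift).mean())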

Cite this Paper


BibTeX
@InProceedings{pmlr-v208-kim23a,
  title     = {Modeling Uplift from Observational Time-Series in Continual Scenarios},
  author    = {Kim, Sanghyun and Choi, Jungwon and Kim, NamHee and Ryu, Jaesung and Lee, Juho},
  booktitle = {Proceedings of The First AAAI Bridge Program on Continual Causality},
  pages     = {75--84},
  year      = {2023},
  editor    = {Mundt, Martin and Cooper, Keiland W. and Dhami, Devendra Singh and Ribeiro, Adéle and Smith, James Seale and Bellot, Alexis and Hayes, Tyler},
  volume    = {208},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v208/kim23a/kim23a.pdf},
  url       = {https://proceedings.mlr.press/v208/kim23a.html},
  abstract  = {As the importance of causality in machine learning grows, we expect the model to learn the correct causal mechanism for robustness even under distribution shifts. Since most of the prior benchmarks focused on vision and language tasks, domain or temporal shifts in causal inference tasks have not been well explored. To this end, we introduce Backend-TS dataset for modeling uplift in continual learning scenarios. We build the dataset with CRUD data and propose continual learning tasks under temporal and domain shifts.}
}
Endnote
%0 Conference Paper
%T Modeling Uplift from Observational Time-Series in Continual Scenarios
%A Sanghyun Kim
%A Jungwon Choi
%A NamHee Kim
%A Jaesung Ryu
%A Juho Lee
%B Proceedings of The First AAAI Bridge Program on Continual Causality
%C Proceedings of Machine Learning Research
%D 2023
%E Martin Mundt
%E Keiland W. Cooper
%E Devendra Singh Dhami
%E Adéle Ribeiro
%E James Seale Smith
%E Alexis Bellot
%E Tyler Hayes
%F pmlr-v208-kim23a
%I PMLR
%P 75--84
%U https://proceedings.mlr.press/v208/kim23a.html
%V 208
%X As the importance of causality in machine learning grows, we expect the model to learn the correct causal mechanism for robustness even under distribution shifts. Since most of the prior benchmarks focused on vision and language tasks, domain or temporal shifts in causal inference tasks have not been well explored. To this end, we introduce Backend-TS dataset for modeling uplift in continual learning scenarios. We build the dataset with CRUD data and propose continual learning tasks under temporal and domain shifts.
APA
Kim, S., Choi, J., Kim, N., Ryu, J., & Lee, J. (2023). Modeling Uplift from Observational Time-Series in Continual Scenarios. Proceedings of The First AAAI Bridge Program on Continual Causality, in Proceedings of Machine Learning Research 208:75-84. Available from https://proceedings.mlr.press/v208/kim23a.html.