Continually Updating Neural Causal Models

Florian Peter Busch, Jonas Seng, Moritz Willig, Matej Zečević
Proceedings of The First AAAI Bridge Program on Continual Causality, PMLR 208:30-37, 2023.

Abstract

A common assumption in causal modelling is that the relations between variables are fixed mechanisms. In reality, however, these mechanisms often change over time, and new data might no longer fit the original model well. Is it then reasonable to regularly train new models, or can we instead update a single model continually? We propose utilizing the field of continual learning to help keep causal models updated over time.
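
The following is a minimal, hypothetical sketch of the setting described in the abstract, not the authors' implementation: a two-variable SCM X → Y whose mechanism (here, a drifting slope) changes across phases, while a single neural mechanism is updated continually on each new batch instead of a new model being trained per phase. The SCM, drift schedule, and PyTorch model are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's implementation): a neural causal mechanism
# for Y given its parent X, updated continually as the true mechanism drifts.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_scm(n, slope):
    """Draw (X, Y) from the SCM Y := slope * X + noise; `slope` drifts over time."""
    x = torch.randn(n, 1)
    y = slope * x + 0.1 * torch.randn(n, 1)
    return x, y

# A single neural mechanism f_Y(X) approximating the (changing) causal relation.
f_y = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(f_y.parameters(), lr=1e-2)
mse = nn.MSELoss()

# The true mechanism changes across phases; the same model is updated continually
# on the incoming data rather than retrained from scratch each time.
for phase, slope in enumerate([1.0, 0.5, -1.0]):
    x, y = sample_scm(256, slope)
    for _ in range(300):
        opt.zero_grad()
        loss = mse(f_y(x), y)
        loss.backward()
        opt.step()
    print(f"phase {phase}: true slope {slope:+.1f}, fit loss {loss.item():.4f}")
```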

Cite this Paper


BibTeX
@InProceedings{pmlr-v208-busch23a,
  title     = {Continually Updating Neural Causal Models},
  author    = {Busch, Florian Peter and Seng, Jonas and Willig, Moritz and Ze\v{c}evi\'c, Matej},
  booktitle = {Proceedings of The First AAAI Bridge Program on Continual Causality},
  pages     = {30--37},
  year      = {2023},
  editor    = {Mundt, Martin and Cooper, Keiland W. and Dhami, Devendra Singh and Ribeiro, Adéle and Smith, James Seale and Bellot, Alexis and Hayes, Tyler},
  volume    = {208},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v208/busch23a/busch23a.pdf},
  url       = {https://proceedings.mlr.press/v208/busch23a.html},
  abstract  = {A common assumption in causal modelling is that the relations between variables are fixed mechanisms. But in reality, these mechanisms often change over time and new data might not fit the original model as well. But is it reasonable to regularly train new models or can we update a single model continually instead? We propose utilizing the field of continual learning to help keep causal models updated over time.}
}
Endnote
%0 Conference Paper
%T Continually Updating Neural Causal Models
%A Florian Peter Busch
%A Jonas Seng
%A Moritz Willig
%A Matej Zečević
%B Proceedings of The First AAAI Bridge Program on Continual Causality
%C Proceedings of Machine Learning Research
%D 2023
%E Martin Mundt
%E Keiland W. Cooper
%E Devendra Singh Dhami
%E Adéle Ribeiro
%E James Seale Smith
%E Alexis Bellot
%E Tyler Hayes
%F pmlr-v208-busch23a
%I PMLR
%P 30--37
%U https://proceedings.mlr.press/v208/busch23a.html
%V 208
%X A common assumption in causal modelling is that the relations between variables are fixed mechanisms. But in reality, these mechanisms often change over time and new data might not fit the original model as well. But is it reasonable to regularly train new models or can we update a single model continually instead? We propose utilizing the field of continual learning to help keep causal models updated over time.
APA
Busch, F. P., Seng, J., Willig, M. & Zečević, M. (2023). Continually Updating Neural Causal Models. Proceedings of The First AAAI Bridge Program on Continual Causality, in Proceedings of Machine Learning Research 208:30-37. Available from https://proceedings.mlr.press/v208/busch23a.html.