Differentiable and Transportable Structure Learning

Jeroen Berrevoets, Nabeel Seedat, Fergus Imrie, Mihaela Van Der Schaar
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:2206-2233, 2023.

Abstract

Directed acyclic graphs (DAGs) encode a great deal of information about a particular distribution in their structure. However, the compute required to infer these structures is typically super-exponential in the number of variables, as inference requires a sweep of a combinatorially large space of potential structures; that is, until recent advances made it possible to search this space using a differentiable metric, drastically reducing search time. While this technique, named NOTEARS, is widely considered a seminal work in DAG discovery, it concedes an important property in favour of differentiability: transportability. To be transportable, the structures discovered on one dataset must apply to another dataset from the same domain. We introduce D-Struct, which recovers transportability in the discovered structures through a novel architecture and loss function while remaining fully differentiable. Because D-Struct remains differentiable, our method can be easily adopted in existing differentiable architectures, as was previously done with NOTEARS. In our experiments, we empirically validate D-Struct with respect to edge accuracy and structural Hamming distance in a variety of settings.
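To make the two key quantities in the abstract concrete, the sketch below (not from the paper's code; a minimal illustration using NumPy) computes a differentiable acyclicity measure in the NOTEARS family, here the polynomial form h(W) = tr((I + W∘W/d)^d) − d rather than the original matrix-exponential form, and the structural Hamming distance (SHD) used to evaluate discovered structures. The function names are our own.

```python
import numpy as np

def acyclicity(W: np.ndarray) -> float:
    """Differentiable acyclicity measure in the NOTEARS family
    (polynomial form): h(W) = tr((I + W∘W / d)^d) - d.
    h(W) == 0 exactly when the weighted adjacency matrix W encodes a DAG,
    and h(W) > 0 otherwise, so it can serve as a smooth penalty."""
    d = W.shape[0]
    M = np.eye(d) + (W * W) / d  # W∘W is the elementwise square
    return float(np.trace(np.linalg.matrix_power(M, d)) - d)

def shd(A: np.ndarray, B: np.ndarray) -> int:
    """Structural Hamming distance between two binary DAG adjacency
    matrices: number of edge additions and deletions, with a
    reversed edge counted as a single error."""
    diff = np.abs(A - B)
    # a reversed edge differs at both (i, j) and (j, i); count it once
    reversals = int(((diff * diff.T) * np.triu(np.ones_like(diff), 1)).sum())
    return int(diff.sum()) - reversals
```

For example, a chain 0 → 1 → 2 has acyclicity exactly zero, a two-node cycle has a strictly positive score, and flipping one edge of the chain yields an SHD of 1 against the original.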

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-berrevoets23a,
  title     = {Differentiable and Transportable Structure Learning},
  author    = {Berrevoets, Jeroen and Seedat, Nabeel and Imrie, Fergus and Van Der Schaar, Mihaela},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {2206--2233},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/berrevoets23a/berrevoets23a.pdf},
  url       = {https://proceedings.mlr.press/v202/berrevoets23a.html},
  abstract  = {Directed acyclic graphs (DAGs) encode a lot of information about a particular distribution in their structure. However, compute required to infer these structures is typically super-exponential in the number of variables, as inference requires a sweep of a combinatorially large space of potential structures. That is, until recent advances made it possible to search this space using a differentiable metric, drastically reducing search time. While this technique, named NOTEARS, is widely considered a seminal work in DAG-discovery, it concedes an important property in favour of differentiability: transportability. To be transportable, the structures discovered on one dataset must apply to another dataset from the same domain. We introduce D-Struct which recovers transportability in the discovered structures through a novel architecture and loss function while remaining fully differentiable. Because D-Struct remains differentiable, our method can be easily adopted in existing differentiable architectures, as was previously done with NOTEARS. In our experiments, we empirically validate D-Struct with respect to edge accuracy and structural Hamming distance in a variety of settings.}
}
Endnote
%0 Conference Paper
%T Differentiable and Transportable Structure Learning
%A Jeroen Berrevoets
%A Nabeel Seedat
%A Fergus Imrie
%A Mihaela Van Der Schaar
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-berrevoets23a
%I PMLR
%P 2206--2233
%U https://proceedings.mlr.press/v202/berrevoets23a.html
%V 202
%X Directed acyclic graphs (DAGs) encode a lot of information about a particular distribution in their structure. However, compute required to infer these structures is typically super-exponential in the number of variables, as inference requires a sweep of a combinatorially large space of potential structures. That is, until recent advances made it possible to search this space using a differentiable metric, drastically reducing search time. While this technique, named NOTEARS, is widely considered a seminal work in DAG-discovery, it concedes an important property in favour of differentiability: transportability. To be transportable, the structures discovered on one dataset must apply to another dataset from the same domain. We introduce D-Struct which recovers transportability in the discovered structures through a novel architecture and loss function while remaining fully differentiable. Because D-Struct remains differentiable, our method can be easily adopted in existing differentiable architectures, as was previously done with NOTEARS. In our experiments, we empirically validate D-Struct with respect to edge accuracy and structural Hamming distance in a variety of settings.
APA
Berrevoets, J., Seedat, N., Imrie, F. &amp; Van Der Schaar, M. (2023). Differentiable and Transportable Structure Learning. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:2206-2233. Available from https://proceedings.mlr.press/v202/berrevoets23a.html.