Position: Why Tabular Foundation Models Should Be a Research Priority

Boris Van Breugel, Mihaela Van Der Schaar
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:48976-48993, 2024.

Abstract

Recent text and image foundation models are incredibly impressive, and they are attracting an ever-increasing share of research resources. In this position piece we aim to shift the ML research community's priorities ever so slightly towards a different modality: tabular data. Tabular data is the dominant modality in many fields, yet it receives hardly any research attention and significantly lags behind in scale and power. We believe the time is now to start developing tabular foundation models, or what we coin a Large Tabular Model (LTM). LTMs could revolutionise the way science and ML use tabular data: not as single datasets analyzed in a vacuum, but contextualized with respect to related datasets. The potential impact is far-reaching: from few-shot tabular models to automating data science; from out-of-distribution synthetic data to empowering multidisciplinary scientific discovery. We intend to excite reflection on the modalities we study, and to convince some researchers to study Large Tabular Models.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-van-breugel24a,
  title     = {Position: Why Tabular Foundation Models Should Be a Research Priority},
  author    = {Van Breugel, Boris and Van Der Schaar, Mihaela},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {48976--48993},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/van-breugel24a/van-breugel24a.pdf},
  url       = {https://proceedings.mlr.press/v235/van-breugel24a.html}
}
APA
Van Breugel, B. & Van Der Schaar, M. (2024). Position: Why Tabular Foundation Models Should Be a Research Priority. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:48976-48993. Available from https://proceedings.mlr.press/v235/van-breugel24a.html.
