TabFlex: Scaling Tabular Learning to Millions with Linear Attention

Yuchen Zeng, Tuan Dinh, Wonjun Kang, Andreas C Mueller
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:74051-74079, 2025.

Abstract

Leveraging the in-context learning (ICL) capability of Large Language Models (LLMs) for tabular classification has gained significant attention for its training-free adaptability across diverse datasets. Recent advancements, like TabPFN, excel on small-scale tabular datasets but struggle to scale to large and complex datasets. Our work enhances the efficiency and scalability of TabPFN for larger datasets by incorporating linear attention mechanisms as a scalable alternative to quadratic-complexity self-attention. Our model, TabFlex, efficiently handles tabular datasets with thousands of features and hundreds of classes, scaling seamlessly to millions of samples. For instance, TabFlex processes the poker-hand dataset with over a million samples in just 5 seconds. Our extensive evaluations demonstrate that TabFlex can achieve over a 2$\times$ speedup compared to TabPFN and a 1.5$\times$ speedup over XGBoost, outperforming 25 tested baselines in terms of efficiency across a diverse range of datasets. Furthermore, TabFlex remains highly effective on large-scale datasets, delivering strong performance with significantly reduced computational costs, especially when combined with data-efficient techniques such as dimensionality reduction and data sampling.
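
The abstract's efficiency claim rests on replacing softmax self-attention, whose cost grows quadratically in the number of in-context samples, with linear attention. The NumPy sketch below is a generic illustration of that substitution, assuming a kernelized linear attention with a simple positive feature map; it is not the TabFlex architecture or the authors' code, only a demonstration of why the n x n score matrix disappears.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard self-attention: the explicit n x n score matrix makes this O(n^2 * d)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])           # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                # (n, d)

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized attention: associativity lets us compute phi(K)^T V first,
    giving O(n * d^2) time and never materializing the n x n matrix."""
    phi = lambda x: np.maximum(x, 0.0) + 1.0          # illustrative positive feature map (an assumption)
    Qp, Kp = phi(Q), phi(K)                           # (n, d) each
    KV = Kp.T @ V                                     # (d, d) summary of keys and values
    Z = Kp.sum(axis=0)                                # (d,) normalizer
    return (Qp @ KV) / (Qp @ Z[:, None] + eps)        # (n, d)

# Toy usage: for large n (e.g. millions of table rows used as context),
# the linear variant's cost grows only linearly in n.
n, d = 4096, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
print(linear_attention(Q, K, V).shape)                # (4096, 64)
```
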

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-zeng25b,
  title     = {{T}ab{F}lex: Scaling Tabular Learning to Millions with Linear Attention},
  author    = {Zeng, Yuchen and Dinh, Tuan and Kang, Wonjun and Mueller, Andreas C},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {74051--74079},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zeng25b/zeng25b.pdf},
  url       = {https://proceedings.mlr.press/v267/zeng25b.html}
}
Endnote
%0 Conference Paper
%T TabFlex: Scaling Tabular Learning to Millions with Linear Attention
%A Yuchen Zeng
%A Tuan Dinh
%A Wonjun Kang
%A Andreas C Mueller
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zeng25b
%I PMLR
%P 74051--74079
%U https://proceedings.mlr.press/v267/zeng25b.html
%V 267
APA
Zeng, Y., Dinh, T., Kang, W. & Mueller, A.C. (2025). TabFlex: Scaling Tabular Learning to Millions with Linear Attention. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:74051-74079. Available from https://proceedings.mlr.press/v267/zeng25b.html.