TabFSBench: Tabular Benchmark for Feature Shifts in Open Environments

Zi-Jian Cheng, Ziyi Jia, Zhi Zhou, Yu-Feng Li, Lan-Zhe Guo
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:10025-10089, 2025.

Abstract

Tabular data is widely utilized in various machine learning tasks. Current tabular learning research predominantly focuses on closed environments, while in real-world applications open environments are often encountered, where distribution and feature shifts occur, leading to significant degradation in model performance. Previous research has primarily concentrated on mitigating distribution shifts, whereas feature shifts, a distinctive and unexplored challenge of tabular data, have garnered limited attention. To this end, this paper conducts the first comprehensive study on feature shifts in tabular data and introduces the first tabular feature-shift benchmark (TabFSBench). TabFSBench evaluates the impact of four distinct feature-shift scenarios on four tabular model categories across various datasets and, for the first time, assesses the performance of large language models (LLMs) and tabular LLMs in a tabular benchmark. Our study yields three main observations: (1) most tabular models have limited applicability in feature-shift scenarios; (2) the importance of the shifted feature set is linearly related to model performance degradation; (3) model performance in closed environments correlates with feature-shift performance. Future research directions are also explored for each observation. Benchmark: LAMDASZ-ML/TabFSBench.
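The feature-shift setting described above — features available at training time but missing or altered at test time — can be illustrated with a minimal sketch. This is a hypothetical toy example (a nearest-centroid classifier with one feature replaced by its training mean), not the benchmark's actual protocol or any of its four shift scenarios:

```python
# Toy illustration of a feature shift: a model trained on all features
# is evaluated after one feature is masked (replaced by its training
# mean), mimicking a missing-feature scenario at test time.

def centroids(X, y):
    """Per-class feature means of the training data."""
    out = {}
    for label in sorted(set(y)):
        rows = [x for x, lab in zip(X, y) if lab == label]
        out[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return out

def predict(cents, x):
    """Assign the class whose centroid is closest (squared L2 distance)."""
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(cents[c], x)))

def accuracy(cents, X, y):
    return sum(predict(cents, x) == lab for x, lab in zip(X, y)) / len(y)

def mask_feature(X, j, fill):
    """Simulate a shift: replace feature j with a constant fill value."""
    return [[fill if k == j else v for k, v in enumerate(x)] for x in X]

# Two classes separated mainly along feature 0.
X_train = [[0.0, 1.0], [0.2, 0.9], [1.0, 1.1], [1.2, 0.8]]
y_train = [0, 0, 1, 1]
X_test  = [[0.1, 1.0], [1.1, 1.0]]
y_test  = [0, 1]

cents = centroids(X_train, y_train)
acc_closed = accuracy(cents, X_test, y_test)          # closed environment: all features intact
fill = sum(x[0] for x in X_train) / len(X_train)      # training mean of feature 0
acc_shift = accuracy(cents, mask_feature(X_test, 0, fill), y_test)
print(acc_closed, acc_shift)
```

Masking the discriminative feature degrades accuracy, consistent with the paper's observation that the importance of the shifted feature set drives performance degradation.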

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-cheng25e,
  title     = {{T}ab{FSB}ench: Tabular Benchmark for Feature Shifts in Open Environments},
  author    = {Cheng, Zi-Jian and Jia, Ziyi and Zhou, Zhi and Li, Yu-Feng and Guo, Lan-Zhe},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {10025--10089},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/cheng25e/cheng25e.pdf},
  url       = {https://proceedings.mlr.press/v267/cheng25e.html},
  abstract  = {Tabular data is widely utilized in various machine learning tasks. Current tabular learning research predominantly focuses on closed environments, while in real-world applications open environments are often encountered, where distribution and feature shifts occur, leading to significant degradation in model performance. Previous research has primarily concentrated on mitigating distribution shifts, whereas feature shifts, a distinctive and unexplored challenge of tabular data, have garnered limited attention. To this end, this paper conducts the first comprehensive study on feature shifts in tabular data and introduces the first tabular feature-shift benchmark (TabFSBench). TabFSBench evaluates the impact of four distinct feature-shift scenarios on four tabular model categories across various datasets and, for the first time, assesses the performance of large language models (LLMs) and tabular LLMs in a tabular benchmark. Our study yields three main observations: (1) most tabular models have limited applicability in feature-shift scenarios; (2) the importance of the shifted feature set is linearly related to model performance degradation; (3) model performance in closed environments correlates with feature-shift performance. Future research directions are also explored for each observation. Benchmark: LAMDASZ-ML/TabFSBench.}
}
Endnote
%0 Conference Paper
%T TabFSBench: Tabular Benchmark for Feature Shifts in Open Environments
%A Zi-Jian Cheng
%A Ziyi Jia
%A Zhi Zhou
%A Yu-Feng Li
%A Lan-Zhe Guo
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-cheng25e
%I PMLR
%P 10025--10089
%U https://proceedings.mlr.press/v267/cheng25e.html
%V 267
%X Tabular data is widely utilized in various machine learning tasks. Current tabular learning research predominantly focuses on closed environments, while in real-world applications open environments are often encountered, where distribution and feature shifts occur, leading to significant degradation in model performance. Previous research has primarily concentrated on mitigating distribution shifts, whereas feature shifts, a distinctive and unexplored challenge of tabular data, have garnered limited attention. To this end, this paper conducts the first comprehensive study on feature shifts in tabular data and introduces the first tabular feature-shift benchmark (TabFSBench). TabFSBench evaluates the impact of four distinct feature-shift scenarios on four tabular model categories across various datasets and, for the first time, assesses the performance of large language models (LLMs) and tabular LLMs in a tabular benchmark. Our study yields three main observations: (1) most tabular models have limited applicability in feature-shift scenarios; (2) the importance of the shifted feature set is linearly related to model performance degradation; (3) model performance in closed environments correlates with feature-shift performance. Future research directions are also explored for each observation. Benchmark: LAMDASZ-ML/TabFSBench.
APA
Cheng, Z., Jia, Z., Zhou, Z., Li, Y. & Guo, L. (2025). TabFSBench: Tabular Benchmark for Feature Shifts in Open Environments. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:10025-10089. Available from https://proceedings.mlr.press/v267/cheng25e.html.