Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes

Nabeel Seedat, Nicolas Huynh, Boris Van Breugel, Mihaela Van Der Schaar
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:44060-44092, 2024.

Abstract

Machine Learning (ML) in low-data settings remains an underappreciated yet crucial problem. Data augmentation methods that increase the sample size of small datasets are therefore key to unlocking the transformative potential of ML in data-deprived regions and domains. Unfortunately, the limited training set constrains traditional tabular synthetic data generators in their ability to generate the large and diverse augmented dataset needed for ML tasks. To address this challenge, we introduce $\texttt{CLLM}$, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime. However, as with any generative model, not all of the data generated by LLMs improves downstream utility. Consequently, we introduce a principled curation mechanism that leverages learning dynamics, coupled with confidence and uncertainty metrics, to obtain a high-quality dataset. Empirically, on multiple real-world datasets, we demonstrate the superior performance of $\texttt{CLLM}$ in the low-data regime compared to conventional generators. We additionally provide insights into the generation and curation steps, shedding light on the features that enable them to output high-quality augmented datasets.
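As a concrete illustration of the curation step, the sketch below filters synthetic samples by their learning dynamics: a small proxy classifier is trained on the limited real data, the probability each training checkpoint assigns to every synthetic sample's proposed label is recorded, and only samples with high average confidence and low variability across checkpoints (an aleatoric-style uncertainty, the mean of p(1-p)) are kept. This is a minimal sketch; the proxy model, thresholds, and function name are illustrative assumptions, not the paper's exact implementation.

import numpy as np
from sklearn.neural_network import MLPClassifier

def curate_synthetic(X_real, y_real, X_syn, y_syn,
                     n_epochs=20, conf_min=0.7, unc_max=0.2):
    # Hypothetical helper: model choice and thresholds are illustrative.
    # Train a proxy classifier on the small real dataset one epoch at a
    # time, recording the probability each checkpoint assigns to each
    # synthetic sample's proposed label.
    clf = MLPClassifier(hidden_layer_sizes=(32,))
    classes = np.unique(y_real)
    per_epoch = []
    for _ in range(n_epochs):
        clf.partial_fit(X_real, y_real, classes=classes)
        proba = clf.predict_proba(X_syn)
        # Column index of each synthetic sample's own label (classes_ is sorted).
        cols = np.searchsorted(clf.classes_, y_syn)
        per_epoch.append(proba[np.arange(len(y_syn)), cols])
    p = np.stack(per_epoch)                       # shape: (n_epochs, n_syn)
    confidence = p.mean(axis=0)                   # average confidence over training
    uncertainty = (p * (1.0 - p)).mean(axis=0)    # aleatoric-style variability
    keep = (confidence >= conf_min) & (uncertainty <= unc_max)
    return X_syn[keep], y_syn[keep]

Samples that the proxy model never learns confidently, or learns inconsistently (e.g., hallucinated or mislabeled LLM generations), would be discarded before the augmented dataset is used to train the downstream model.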

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-seedat24a,
  title     = {Curated {LLM}: Synergy of {LLM}s and Data Curation for tabular augmentation in low-data regimes},
  author    = {Seedat, Nabeel and Huynh, Nicolas and Van Breugel, Boris and Van Der Schaar, Mihaela},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {44060--44092},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/seedat24a/seedat24a.pdf},
  url       = {https://proceedings.mlr.press/v235/seedat24a.html}
}
EndNote
%0 Conference Paper
%T Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes
%A Nabeel Seedat
%A Nicolas Huynh
%A Boris Van Breugel
%A Mihaela Van Der Schaar
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-seedat24a
%I PMLR
%P 44060--44092
%U https://proceedings.mlr.press/v235/seedat24a.html
%V 235
APA
Seedat, N., Huynh, N., Van Breugel, B., & Van Der Schaar, M. (2024). Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:44060-44092. Available from https://proceedings.mlr.press/v235/seedat24a.html.