Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models

Sayed Mohammadreza Tayaranian Hosseini, Seyyed Hasan Mozafari, Brett H. Meyer, James J. Clark, Warren J. Gross
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:1-28, 2025.

Abstract

Transformer-based language models have shown state-of-the-art performance on a variety of natural language understanding tasks. To achieve this performance, these models are first pre-trained on a general corpus and then fine-tuned on downstream tasks. Previous work studied the effect of pruning the training set of the downstream tasks on the performance of the model on its evaluation set. In this work, we propose an automatic dataset pruning method for the training set of fine-tuning tasks. Our method is based on the model’s success rate in correctly classifying each training data point. Unlike previous work, which relies on user feedback to determine subset size, our method automatically extracts training subsets that are adapted for each pair of model and fine-tuning task. Our method provides multiple subsets for use in dataset pruning that navigate the trade-off between subset size and evaluation accuracy. Our largest subset, which we also refer to as the winning ticket subset, is on average $3\times$ smaller than the original training set of the fine-tuning task. Our experiments on 5 downstream tasks and 2 language models show that, on average, fine-tuning on the winning ticket subsets results in a $0.1\%$ increase in the evaluation performance of the model.
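To make the abstract's success-rate criterion concrete, the following minimal Python sketch illustrates one way such pruning could work. It assumes the success rate of a training example is the fraction of fine-tuning epochs in which the model classifies it correctly, and that subsets are extracted by sweeping a rate threshold. The function names, the thresholding rule, and the direction of pruning are illustrative assumptions, not the authors' implementation.

import numpy as np

def success_rates(correct_history: np.ndarray) -> np.ndarray:
    # correct_history: (num_epochs, num_examples) boolean matrix where entry
    # (e, i) is True if the model classified example i correctly at epoch e.
    # Returns the per-example success rate in [0, 1].
    return correct_history.mean(axis=0)

def prune_training_set(correct_history: np.ndarray, threshold: float) -> np.ndarray:
    # Hypothetical pruning rule (an assumption, not the paper's exact rule):
    # keep the indices of examples whose success rate is at most `threshold`.
    # Sweeping `threshold` yields a family of subsets that trades off subset
    # size against evaluation accuracy, as the abstract describes.
    rates = success_rates(correct_history)
    return np.nonzero(rates <= threshold)[0]

# Toy usage: 4 fine-tuning epochs, 6 training examples.
history = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 1, 1, 1, 1, 1],
], dtype=bool)
print(success_rates(history))            # [1.   1.   0.5  1.   0.25 1.  ]
print(prune_training_set(history, 0.5))  # keeps the harder examples: [2 4]

Under these assumptions, a larger threshold keeps more of the training set; the largest subset that preserves evaluation accuracy would correspond to what the paper calls the winning ticket subset.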

Cite this Paper


BibTeX
@InProceedings{pmlr-v274-hosseini25a,
  title     = {Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models},
  author    = {Hosseini, Sayed Mohammadreza Tayaranian and Mozafari, Seyyed Hasan and Meyer, Brett H. and Clark, James J. and Gross, Warren J.},
  booktitle = {Proceedings of The 3rd Conference on Lifelong Learning Agents},
  pages     = {1--28},
  year      = {2025},
  editor    = {Lomonaco, Vincenzo and Melacci, Stefano and Tuytelaars, Tinne and Chandar, Sarath and Pascanu, Razvan},
  volume    = {274},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Jul--01 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v274/main/assets/hosseini25a/hosseini25a.pdf},
  url       = {https://proceedings.mlr.press/v274/hosseini25a.html},
  abstract  = {Transformer-based language models have shown state-of-the-art performance on a variety of natural language understanding tasks. To achieve this performance, these models are first pre-trained on a general corpus and then fine-tuned on downstream tasks. Previous work studied the effect of pruning the training set of the downstream tasks on the performance of the model on its evaluation set. In this work, we propose an automatic dataset pruning method for the training set of fine-tuning tasks. Our method is based on the model's success rate in correctly classifying each training data point. Unlike previous work, which relies on user feedback to determine subset size, our method automatically extracts training subsets that are adapted for each pair of model and fine-tuning task. Our method provides multiple subsets for use in dataset pruning that navigate the trade-off between subset size and evaluation accuracy. Our largest subset, which we also refer to as the winning ticket subset, is on average $3\times$ smaller than the original training set of the fine-tuning task. Our experiments on 5 downstream tasks and 2 language models show that, on average, fine-tuning on the winning ticket subsets results in a $0.1\%$ increase in the evaluation performance of the model.}
}
Endnote
%0 Conference Paper
%T Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models
%A Sayed Mohammadreza Tayaranian Hosseini
%A Seyyed Hasan Mozafari
%A Brett H. Meyer
%A James J. Clark
%A Warren J. Gross
%B Proceedings of The 3rd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2025
%E Vincenzo Lomonaco
%E Stefano Melacci
%E Tinne Tuytelaars
%E Sarath Chandar
%E Razvan Pascanu
%F pmlr-v274-hosseini25a
%I PMLR
%P 1--28
%U https://proceedings.mlr.press/v274/hosseini25a.html
%V 274
%X Transformer-based language models have shown state-of-the-art performance on a variety of natural language understanding tasks. To achieve this performance, these models are first pre-trained on a general corpus and then fine-tuned on downstream tasks. Previous work studied the effect of pruning the training set of the downstream tasks on the performance of the model on its evaluation set. In this work, we propose an automatic dataset pruning method for the training set of fine-tuning tasks. Our method is based on the model’s success rate in correctly classifying each training data point. Unlike previous work, which relies on user feedback to determine subset size, our method automatically extracts training subsets that are adapted for each pair of model and fine-tuning task. Our method provides multiple subsets for use in dataset pruning that navigate the trade-off between subset size and evaluation accuracy. Our largest subset, which we also refer to as the winning ticket subset, is on average $3\times$ smaller than the original training set of the fine-tuning task. Our experiments on 5 downstream tasks and 2 language models show that, on average, fine-tuning on the winning ticket subsets results in a $0.1\%$ increase in the evaluation performance of the model.
APA
Hosseini, S.M.T., Mozafari, S.H., Meyer, B.H., Clark, J.J. & Gross, W.J. (2025). Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models. Proceedings of The 3rd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 274:1-28. Available from https://proceedings.mlr.press/v274/hosseini25a.html.