Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning

Jing Xu, Jingzhao Zhang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:55501-55524, 2024.

Abstract

Fine-tuning large language models (LLMs) can be costly. Parameter-efficient fine-tuning (PEFT) addresses this problem by training only a fraction of the parameters, and its success reveals the expressiveness and flexibility of pretrained models. This paper studies the limits of PEFT by further simplifying its design and reducing the number of trainable parameters beyond standard setups. To this end, we use Random Masking to fine-tune the pretrained model. Despite its simplicity, we show that Random Masking is surprisingly effective: with a larger-than-expected learning rate, Random Masking can match the performance of standard PEFT algorithms such as LoRA on various tasks while using fewer trainable parameters. We provide both empirical and theoretical explorations into the success of Random Masking. We show that masking induces a flatter loss landscape and more distant solutions, which allows for and necessitates large learning rates.
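For readers who want a concrete picture of the method described above, the following is a minimal, hypothetical sketch of Random Masking fine-tuning in PyTorch. It assumes a Hugging Face roberta-base classifier, an illustrative masking ratio, and an illustrative learning rate; none of these are the paper's exact settings, and the code is not the authors' implementation. The idea is to freeze a random subset of each weight tensor's entries by zeroing their gradients, and to train the remaining entries with an unusually large learning rate.

import torch
from transformers import AutoModelForSequenceClassification

# Pretrained backbone; the model and task head are illustrative choices.
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

trainable_fraction = 1e-3  # fraction of entries left trainable (assumed value)

for name, param in model.named_parameters():
    if "classifier" in name:
        continue  # keep the task head fully trainable (an assumption, not from the paper)
    # Random binary mask over the tensor's entries: 1 = trainable, 0 = frozen.
    mask = (torch.rand_like(param) < trainable_fraction).float()
    # Zero the gradient of frozen entries on every backward pass,
    # so the optimizer only updates the randomly selected entries.
    param.register_hook(lambda grad, m=mask: grad * m)

# A larger-than-usual learning rate, in line with the abstract's observation that
# sparse random masking allows for and needs large learning rates (value is illustrative).
# Weight decay is disabled so the frozen entries are not perturbed by decay.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-2, weight_decay=0.0)

In an actual training loop, the masks must live on the same device as the parameters; moving the model to the GPU before registering the hooks handles this.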

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-xu24ag,
  title     = {Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning},
  author    = {Xu, Jing and Zhang, Jingzhao},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {55501--55524},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/xu24ag/xu24ag.pdf},
  url       = {https://proceedings.mlr.press/v235/xu24ag.html},
  abstract  = {Fine-tuning large language models (LLM) can be costly. Parameter-efficient fine-tuning (PEFT) addresses the problems by training a fraction of the parameters, whose success reveals the expressiveness and flexibility of pretrained models. This paper studies the limit of PEFT, by further simplifying its design and reducing the number of trainable parameters beyond standard setups. To this end, we use Random Masking to fine-tune the pretrained model. Despite its simplicity, we show that Random Masking is surprisingly effective: with a larger-than-expected learning rate, Random Masking can match the performance of standard PEFT algorithms such as LoRA on various tasks, using fewer trainable parameters. We provide both empirical and theoretical explorations into the success of Random Masking. We show that masking induces a flatter loss landscape and more distant solutions, which allows for and necessitates large learning rates.}
}
Endnote
%0 Conference Paper
%T Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning
%A Jing Xu
%A Jingzhao Zhang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-xu24ag
%I PMLR
%P 55501--55524
%U https://proceedings.mlr.press/v235/xu24ag.html
%V 235
%X Fine-tuning large language models (LLM) can be costly. Parameter-efficient fine-tuning (PEFT) addresses the problems by training a fraction of the parameters, whose success reveals the expressiveness and flexibility of pretrained models. This paper studies the limit of PEFT, by further simplifying its design and reducing the number of trainable parameters beyond standard setups. To this end, we use Random Masking to fine-tune the pretrained model. Despite its simplicity, we show that Random Masking is surprisingly effective: with a larger-than-expected learning rate, Random Masking can match the performance of standard PEFT algorithms such as LoRA on various tasks, using fewer trainable parameters. We provide both empirical and theoretical explorations into the success of Random Masking. We show that masking induces a flatter loss landscape and more distant solutions, which allows for and necessitates large learning rates.
APA
Xu, J. & Zhang, J. (2024). Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:55501-55524. Available from https://proceedings.mlr.press/v235/xu24ag.html.
