Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts

Jiang-Xin Shi, Tong Wei, Zhi Zhou, Jie-Jing Shao, Xin-Yan Han, Yu-Feng Li
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:45014-45039, 2024.

Abstract

The fine-tuning paradigm for addressing long-tail learning tasks has sparked significant interest since the emergence of foundation models. Nonetheless, how fine-tuning impacts performance in long-tail learning has not been explicitly quantified. In this paper, we show that heavy fine-tuning may even lead to non-negligible performance deterioration on tail classes, and that lightweight fine-tuning is more effective. We attribute this to inconsistent class conditions caused by heavy fine-tuning. Building on this observation, we develop a low-complexity and accurate long-tail learning algorithm, LIFT, which aims for fast prediction and compact models through adaptive lightweight fine-tuning. Experiments verify that, compared with state-of-the-art approaches, LIFT significantly reduces both training time and the number of learned parameters while delivering more accurate predictions. The implementation code is available at https://github.com/shijxcs/LIFT.
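
The abstract contrasts heavy (full-parameter) fine-tuning with lightweight fine-tuning of a foundation model. The sketch below is a rough illustration of the lightweight setting only: it freezes a placeholder backbone and trains a small classifier head. It is not the authors' LIFT algorithm (whose adaptive components are described in the paper and the linked repository), and the names used here (PlaceholderBackbone, build_lightweight_finetuner) are hypothetical.

```python
# Minimal sketch: lightweight fine-tuning = freeze the pretrained backbone,
# train only a small head. Heavy fine-tuning would instead update all parameters.
import torch
import torch.nn as nn

class PlaceholderBackbone(nn.Module):
    """Stands in for a pretrained foundation-model encoder (e.g., a ViT)."""
    def __init__(self, in_dim: int = 3 * 32 * 32, feat_dim: int = 512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.encoder(x)

def build_lightweight_finetuner(backbone: nn.Module, feat_dim: int, num_classes: int):
    # Freeze every backbone parameter; only the head below receives gradients.
    for p in backbone.parameters():
        p.requires_grad = False
    head = nn.Linear(feat_dim, num_classes)
    model = nn.Sequential(backbone, head)
    # Optimize only the (few) trainable parameters.
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=1e-2)
    return model, optimizer

if __name__ == "__main__":
    backbone = PlaceholderBackbone(feat_dim=512)
    model, optimizer = build_lightweight_finetuner(backbone, feat_dim=512, num_classes=100)
    x = torch.randn(8, 3, 32, 32)            # dummy image batch
    y = torch.randint(0, 100, (8,))          # dummy labels
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {n_trainable}")  # only the head's parameters
```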

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-shi24g,
  title     = {Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts},
  author    = {Shi, Jiang-Xin and Wei, Tong and Zhou, Zhi and Shao, Jie-Jing and Han, Xin-Yan and Li, Yu-Feng},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {45014--45039},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/shi24g/shi24g.pdf},
  url       = {https://proceedings.mlr.press/v235/shi24g.html},
  abstract  = {The fine-tuning paradigm in addressing long-tail learning tasks has sparked significant interest since the emergence of foundation models. Nonetheless, how fine-tuning impacts performance in long-tail learning was not explicitly quantified. In this paper, we disclose that heavy fine-tuning may even lead to non-negligible performance deterioration on tail classes, and lightweight fine-tuning is more effective. The reason is attributed to inconsistent class conditions caused by heavy fine-tuning. With the observation above, we develop a low-complexity and accurate long-tail learning algorithms LIFT with the goal of facilitating fast prediction and compact models by adaptive lightweight fine-tuning. Experiments clearly verify that both the training time and the learned parameters are significantly reduced with more accurate predictive performance compared with state-of-the-art approaches. The implementation code is available at https://github.com/shijxcs/LIFT.}
}
Endnote
%0 Conference Paper
%T Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
%A Jiang-Xin Shi
%A Tong Wei
%A Zhi Zhou
%A Jie-Jing Shao
%A Xin-Yan Han
%A Yu-Feng Li
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-shi24g
%I PMLR
%P 45014--45039
%U https://proceedings.mlr.press/v235/shi24g.html
%V 235
%X The fine-tuning paradigm in addressing long-tail learning tasks has sparked significant interest since the emergence of foundation models. Nonetheless, how fine-tuning impacts performance in long-tail learning was not explicitly quantified. In this paper, we disclose that heavy fine-tuning may even lead to non-negligible performance deterioration on tail classes, and lightweight fine-tuning is more effective. The reason is attributed to inconsistent class conditions caused by heavy fine-tuning. With the observation above, we develop a low-complexity and accurate long-tail learning algorithms LIFT with the goal of facilitating fast prediction and compact models by adaptive lightweight fine-tuning. Experiments clearly verify that both the training time and the learned parameters are significantly reduced with more accurate predictive performance compared with state-of-the-art approaches. The implementation code is available at https://github.com/shijxcs/LIFT.
APA
Shi, J., Wei, T., Zhou, Z., Shao, J., Han, X. & Li, Y. (2024). Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:45014-45039. Available from https://proceedings.mlr.press/v235/shi24g.html.
