Two Fists, One Heart: Multi-Objective Optimization Based Strategy Fusion for Long-tailed Learning

Zhe Zhao, Pengkun Wang, Haibin Wen, Wei Xu, Lai Song, Qingfu Zhang, Yang Wang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:61040-61071, 2024.

Abstract

Real-world data generally follows a long-tailed distribution, which prevents traditional high-performance training strategies from achieving their usual effectiveness. Various insights have been proposed to alleviate the challenges posed by this distribution. However, observations indicate that models trained on long-tailed distributions always exhibit a trade-off between the performance of head and tail classes. To understand this trade-off deeply, we first analyze it theoretically and transform it into a multi-objective optimization (MOO) problem. Motivated by these analyses, we propose the idea of strategy fusion for MOO long-tailed learning and point out the potential conflict problem. We further design a Multi-Objective Optimization based Strategy Fusion (MOOSF) method, which effectively resolves conflicts and achieves an efficient fusion of heterogeneous strategies. Comprehensive experiments on mainstream datasets show that even the simplest strategy fusion can outperform complex long-tailed strategies. More importantly, it provides a new perspective for generalized long-tailed learning. The code is available in the accompanying supplementary materials.
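To make the conflict problem concrete: when head-class and tail-class losses are treated as two objectives, their gradients can point in opposing directions, so a naive sum cancels progress on one objective. Below is a minimal sketch of one standard way to handle such conflicts, a PCGrad-style projection of each gradient onto the normal plane of the other. This is an illustrative assumption for intuition only, not a reproduction of the paper's MOOSF method; the function name and projection rule are hypothetical.

```python
import numpy as np

def resolve_conflict(g_head, g_tail):
    """Combine two objective gradients; if they conflict (negative dot
    product), project each onto the normal plane of the other before
    summing, so the update direction hurts neither objective."""
    g_h, g_t = g_head.copy(), g_tail.copy()
    if np.dot(g_head, g_tail) < 0:
        # remove the component of each gradient that opposes the other
        g_h = g_head - np.dot(g_head, g_tail) / np.dot(g_tail, g_tail) * g_tail
        g_t = g_tail - np.dot(g_tail, g_head) / np.dot(g_head, g_head) * g_head
    return g_h + g_t  # combined update direction

# conflicting example: the raw sum [0, 1] would stall the head objective
g = resolve_conflict(np.array([1.0, 0.0]), np.array([-1.0, 1.0]))
# the projected combination keeps a non-negative inner product with both
```

The combined direction is guaranteed not to increase either objective to first order, which is the basic property a conflict-resolving fusion needs.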

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhao24o,
  title     = {Two Fists, One Heart: Multi-Objective Optimization Based Strategy Fusion for Long-tailed Learning},
  author    = {Zhao, Zhe and Wang, Pengkun and Wen, Haibin and Xu, Wei and Song, Lai and Zhang, Qingfu and Wang, Yang},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {61040--61071},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24o/zhao24o.pdf},
  url       = {https://proceedings.mlr.press/v235/zhao24o.html},
  abstract  = {Real-world data generally follows a long-tailed distribution, which makes traditional high-performance training strategies unable to show their usual effects. Various insights have been proposed to alleviate this challenging distribution. However, some observations indicate that models trained on long-tailed distributions always show a trade-off between the performance of head and tail classes. For a profound understanding of the trade-off, we first theoretically analyze the trade-off problem in long-tailed learning and creatively transform the trade-off problem in long-tailed learning into a multi-objective optimization (MOO) problem. Motivated by these analyses, we propose the idea of strategy fusion for MOO long-tailed learning and point out the potential conflict problem. We further design a Multi-Objective Optimization based Strategy Fusion (MOOSF), which effectively resolves conflicts, and achieves an efficient fusion of heterogeneous strategies. Comprehensive experiments on mainstream datasets show that even the simplest strategy fusion can outperform complex long-tailed strategies. More importantly, it provides a new perspective for generalized long-tailed learning. The code is available in the accompanying supplementary materials.}
}
Endnote
%0 Conference Paper
%T Two Fists, One Heart: Multi-Objective Optimization Based Strategy Fusion for Long-tailed Learning
%A Zhe Zhao
%A Pengkun Wang
%A Haibin Wen
%A Wei Xu
%A Lai Song
%A Qingfu Zhang
%A Yang Wang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhao24o
%I PMLR
%P 61040--61071
%U https://proceedings.mlr.press/v235/zhao24o.html
%V 235
%X Real-world data generally follows a long-tailed distribution, which makes traditional high-performance training strategies unable to show their usual effects. Various insights have been proposed to alleviate this challenging distribution. However, some observations indicate that models trained on long-tailed distributions always show a trade-off between the performance of head and tail classes. For a profound understanding of the trade-off, we first theoretically analyze the trade-off problem in long-tailed learning and creatively transform the trade-off problem in long-tailed learning into a multi-objective optimization (MOO) problem. Motivated by these analyses, we propose the idea of strategy fusion for MOO long-tailed learning and point out the potential conflict problem. We further design a Multi-Objective Optimization based Strategy Fusion (MOOSF), which effectively resolves conflicts, and achieves an efficient fusion of heterogeneous strategies. Comprehensive experiments on mainstream datasets show that even the simplest strategy fusion can outperform complex long-tailed strategies. More importantly, it provides a new perspective for generalized long-tailed learning. The code is available in the accompanying supplementary materials.
APA
Zhao, Z., Wang, P., Wen, H., Xu, W., Song, L., Zhang, Q. & Wang, Y. (2024). Two Fists, One Heart: Multi-Objective Optimization Based Strategy Fusion for Long-tailed Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:61040-61071. Available from https://proceedings.mlr.press/v235/zhao24o.html.

Related Material