Leveraged Weighted Loss for Partial Label Learning

Hongwei Wen, Jingyi Cui, Hanyuan Hang, Jiabin Liu, Yisen Wang, Zhouchen Lin
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11091-11100, 2021.

Abstract

As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned a set of candidate labels, only one of which is true. Despite many methodological studies on learning from partial labels, there is still a lack of theoretical understanding of their risk-consistency properties under relatively weak assumptions, especially of the link between theoretical results and the empirical choice of parameters. In this paper, we propose a family of loss functions named \textit{Leveraged Weighted} (LW) loss, which for the first time introduces a leverage parameter $\beta$ to control the trade-off between losses on partial labels and non-partial ones. On the theoretical side, we derive a generalized risk-consistency result for the LW loss in learning from partial labels, based on which we provide guidance on the choice of the leverage parameter $\beta$. In experiments, we verify this theoretical guidance and show the high effectiveness of our proposed LW loss on both benchmark and real datasets compared with other state-of-the-art partial label learning algorithms.
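The trade-off the abstract describes can be sketched in code. The following is an illustrative reconstruction, not the authors' implementation: it assumes the LW loss takes the form of a sum of a binary loss $\psi$ over candidate (partial) labels plus $\beta$ times the same loss, with flipped sign, over non-candidate labels, with uniform weights and a sigmoid-style $\psi$ chosen here purely for concreteness.

```python
import math

def lw_loss(scores, candidates, beta=1.0, psi=None):
    """Leveraged Weighted (LW) loss -- illustrative sketch only.

    scores:     list of real-valued class scores g_k(x), one per class
    candidates: set of candidate (partial) label indices
    beta:       leverage parameter trading off the two sums
    psi:        binary loss; defaults to the sigmoid loss psi(z) = 1/(1 + e^z),
                which is small when z is large
    """
    if psi is None:
        psi = lambda z: 1.0 / (1.0 + math.exp(z))
    # Loss on candidate labels: encourages their scores to be high.
    partial = sum(psi(g) for k, g in enumerate(scores) if k in candidates)
    # Loss on non-candidate labels: encourages their scores to be low,
    # weighted by the leverage parameter beta.
    non_partial = sum(psi(-g) for k, g in enumerate(scores) if k not in candidates)
    return partial + beta * non_partial

# Example: 4 classes, candidate set {0, 2}
loss = lw_loss([2.0, -1.0, 0.5, -3.0], {0, 2}, beta=1.0)
```

Setting `beta=0` ignores non-candidate labels entirely, while larger `beta` penalizes high scores on labels outside the candidate set more strongly; the paper's theoretical guidance concerns how this parameter should be chosen.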

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-wen21a,
  title     = {Leveraged Weighted Loss for Partial Label Learning},
  author    = {Wen, Hongwei and Cui, Jingyi and Hang, Hanyuan and Liu, Jiabin and Wang, Yisen and Lin, Zhouchen},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11091--11100},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/wen21a/wen21a.pdf},
  url       = {https://proceedings.mlr.press/v139/wen21a.html},
  abstract  = {As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned with a set of candidate labels, whereas only one of them is true. Despite many methodology studies on learning from partial labels, there still lacks theoretical understandings of their risk consistent properties under relatively weak assumptions, especially on the link between theoretical results and the empirical choice of parameters. In this paper, we propose a family of loss functions named \textit{Leveraged Weighted} (LW) loss, which for the first time introduces the leverage parameter $\beta$ to consider the trade-off between losses on partial labels and non-partial ones. From the theoretical side, we derive a generalized result of risk consistency for the LW loss in learning from partial labels, based on which we provide guidance to the choice of the leverage parameter $\beta$. In experiments, we verify the theoretical guidance, and show the high effectiveness of our proposed LW loss on both benchmark and real datasets compared with other state-of-the-art partial label learning algorithms.}
}
Endnote
%0 Conference Paper
%T Leveraged Weighted Loss for Partial Label Learning
%A Hongwei Wen
%A Jingyi Cui
%A Hanyuan Hang
%A Jiabin Liu
%A Yisen Wang
%A Zhouchen Lin
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-wen21a
%I PMLR
%P 11091--11100
%U https://proceedings.mlr.press/v139/wen21a.html
%V 139
%X As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned with a set of candidate labels, whereas only one of them is true. Despite many methodology studies on learning from partial labels, there still lacks theoretical understandings of their risk consistent properties under relatively weak assumptions, especially on the link between theoretical results and the empirical choice of parameters. In this paper, we propose a family of loss functions named \textit{Leveraged Weighted} (LW) loss, which for the first time introduces the leverage parameter $\beta$ to consider the trade-off between losses on partial labels and non-partial ones. From the theoretical side, we derive a generalized result of risk consistency for the LW loss in learning from partial labels, based on which we provide guidance to the choice of the leverage parameter $\beta$. In experiments, we verify the theoretical guidance, and show the high effectiveness of our proposed LW loss on both benchmark and real datasets compared with other state-of-the-art partial label learning algorithms.
APA
Wen, H., Cui, J., Hang, H., Liu, J., Wang, Y. & Lin, Z. (2021). Leveraged Weighted Loss for Partial Label Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11091-11100. Available from https://proceedings.mlr.press/v139/wen21a.html.