Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality

Ikko Yamane, Yann Chevaleyre, Takashi Ishida, Florian Yger
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:4768-4801, 2023.

Abstract

In mediated uncoupled learning (MU-learning), the goal is to predict an output variable $Y$ from an input variable $X$, as in ordinary supervised learning, while the training data contain no joint samples of $(X, Y)$ but only independent samples of $(X, U)$ and $(U, Y)$, each observed together with a mediating variable $U$. Existing MU-learning methods can handle only the squared loss, which prohibits the use of other popular loss functions such as the cross-entropy loss. We propose a general MU-learning framework that handles losses expressed as Bregman divergences, which cover a wide range of loss functions useful for various types of tasks, in a unified manner. This loss family has maximal generality among those whose minimizers characterize the conditional expectation. We prove that the proposed objective function is a tighter approximation to the oracle loss that one would minimize if ordinary supervised samples of $(X, Y)$ were available. We also propose an estimator of an interval containing the expected test loss of a trained model's predictions, using only $(X, U)$- and $(U, Y)$-data. We provide a theoretical analysis of the excess risk of the proposed method and confirm its practical usefulness with regression experiments on synthetic data and low-quality image classification experiments on benchmark datasets.
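As background for the loss family discussed above (a standard recap, not a result of this paper): for a differentiable, strictly convex generator $\phi$, the Bregman divergence between a target $y$ and a prediction $\hat{y}$ is

$$B_\phi(y, \hat{y}) = \phi(y) - \phi(\hat{y}) - \langle \nabla \phi(\hat{y}),\, y - \hat{y} \rangle.$$

Choosing $\phi(y) = \|y\|^2$ recovers the squared loss handled by earlier MU-learning methods, while the negative Shannon entropy $\phi(p) = \sum_k p_k \log p_k$ on the probability simplex yields the Kullback-Leibler divergence, which for one-hot targets coincides with the cross-entropy loss mentioned in the abstract. The property the abstract refers to is the classical fact that the conditional expectation $\mathbb{E}[Y \mid X]$ minimizes the expected Bregman divergence $\mathbb{E}[B_\phi(Y, f(X))]$ over predictors $f$, for every valid generator $\phi$.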

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-yamane23a,
  title     = {Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality},
  author    = {Yamane, Ikko and Chevaleyre, Yann and Ishida, Takashi and Yger, Florian},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {4768--4801},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/yamane23a/yamane23a.pdf},
  url       = {https://proceedings.mlr.press/v206/yamane23a.html}
}
Endnote
%0 Conference Paper
%T Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality
%A Ikko Yamane
%A Yann Chevaleyre
%A Takashi Ishida
%A Florian Yger
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-yamane23a
%I PMLR
%P 4768--4801
%U https://proceedings.mlr.press/v206/yamane23a.html
%V 206
APA
Yamane, I., Chevaleyre, Y., Ishida, T., & Yger, F. (2023). Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:4768-4801. Available from https://proceedings.mlr.press/v206/yamane23a.html.
