Domain Generalisation via Imprecise Learning

Anurag Singh, Siu Lun Chau, Shahine Bouabid, Krikamol Muandet
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:45544-45570, 2024.

Abstract

Out-of-distribution (OOD) generalisation is challenging because it involves not only learning from empirical data, but also deciding among various notions of generalisation, e.g. optimise based on the average-case risk, worst-case risk, or interpolations thereof. While this decision should in principle be made by the model operator, such as a medical doctor in practice, this information might not be available at training time. Under such deployment uncertainty, machine learners are forced into arbitrary commitments to specific generalisation strategies. We introduce the Imprecise Domain Generalisation framework to mitigate this, featuring an imprecise risk optimisation that allows learners to stay imprecise by optimising against a continuous spectrum of generalisation strategies during training, and a model framework that allows operators to specify their generalisation preference at deployment. Our work, supported by theoretical and empirical evidence, showcases the benefits of integrating imprecision into domain generalisation.
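The spectrum of generalisation strategies mentioned in the abstract can be illustrated with a minimal sketch: given empirical risks on each training domain, a single parameter interpolates between the average-case and worst-case objectives. This is only an illustrative assumption, not the paper's actual imprecise risk optimisation; the names `aggregate_risk` and `lam` are hypothetical.

```python
import numpy as np

def aggregate_risk(domain_risks, lam):
    """Interpolate between average-case (lam=0) and worst-case (lam=1) risk.

    `lam` stands in for an operator-chosen generalisation preference;
    the framework in the paper optimises over the whole spectrum at once.
    """
    risks = np.asarray(domain_risks, dtype=float)
    return (1.0 - lam) * risks.mean() + lam * risks.max()

# Made-up per-domain empirical risks for three training domains.
risks = [0.2, 0.5, 0.9]
avg_case = aggregate_risk(risks, 0.0)    # average-case risk
worst_case = aggregate_risk(risks, 1.0)  # worst-case risk
blended = aggregate_risk(risks, 0.5)     # one interpolation of the two
```

In this toy picture, a precise learner would fix `lam` before training; the framework described above instead keeps the choice open so the operator can supply it at deployment.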

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-singh24a,
  title     = {Domain Generalisation via Imprecise Learning},
  author    = {Singh, Anurag and Chau, Siu Lun and Bouabid, Shahine and Muandet, Krikamol},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {45544--45570},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/singh24a/singh24a.pdf},
  url       = {https://proceedings.mlr.press/v235/singh24a.html},
  abstract  = {Out-of-distribution (OOD) generalisation is challenging because it involves not only learning from empirical data, but also deciding among various notions of generalisation, e.g. optimise based on the average-case risk, worst-case risk, or interpolations thereof. While this decision should in principle be decided by the model operator like medical doctors in practice, this information might not always be available at training time. This situation leads to arbitrary commitments to specific generalisation strategies by machine learners due to these deployment uncertainties. We introduce the Imprecise Domain Generalisation framework to mitigate this, featuring an imprecise risk optimisation that allows learners to stay imprecise by optimising against a continuous spectrum of generalisation strategies during training, and a model framework that allows operators to specify their generalisation preference at deployment. Our work, supported by theoretical and empirical evidence, showcases the benefits of integrating imprecision into domain generalisation.}
}
Endnote
%0 Conference Paper
%T Domain Generalisation via Imprecise Learning
%A Anurag Singh
%A Siu Lun Chau
%A Shahine Bouabid
%A Krikamol Muandet
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-singh24a
%I PMLR
%P 45544--45570
%U https://proceedings.mlr.press/v235/singh24a.html
%V 235
%X Out-of-distribution (OOD) generalisation is challenging because it involves not only learning from empirical data, but also deciding among various notions of generalisation, e.g. optimise based on the average-case risk, worst-case risk, or interpolations thereof. While this decision should in principle be decided by the model operator like medical doctors in practice, this information might not always be available at training time. This situation leads to arbitrary commitments to specific generalisation strategies by machine learners due to these deployment uncertainties. We introduce the Imprecise Domain Generalisation framework to mitigate this, featuring an imprecise risk optimisation that allows learners to stay imprecise by optimising against a continuous spectrum of generalisation strategies during training, and a model framework that allows operators to specify their generalisation preference at deployment. Our work, supported by theoretical and empirical evidence, showcases the benefits of integrating imprecision into domain generalisation.
APA
Singh, A., Chau, S. L., Bouabid, S. & Muandet, K. (2024). Domain Generalisation via Imprecise Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:45544-45570. Available from https://proceedings.mlr.press/v235/singh24a.html.
