Domain Generalization via Nuclear Norm Regularization

Zhenmei Shi, Yifei Ming, Ying Fan, Frederic Sala, Yingyu Liang
Conference on Parsimony and Learning, PMLR 234:179-201, 2024.

Abstract

The ability to generalize to unseen domains is crucial for machine learning systems deployed in the real world, especially when we only have data from limited training domains. In this paper, we propose a simple and effective regularization method based on the nuclear norm of the learned features for domain generalization. Intuitively, the proposed regularizer mitigates the impacts of environmental features and encourages learning domain-invariant features. Theoretically, we provide insights into why nuclear norm regularization is more effective compared to ERM and alternative regularization methods. Empirically, we conduct extensive experiments on both synthetic and real datasets. We show nuclear norm regularization achieves strong performance compared to baselines in a wide range of domain generalization tasks. Moreover, our regularizer is broadly applicable with various methods such as ERM and SWAD with consistently improved performance, e.g., 1.7% and 0.9% test accuracy improvements respectively on the DomainBed benchmark.
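The method described in the abstract penalizes the nuclear norm (sum of singular values) of the learned feature matrix alongside the task loss. A minimal NumPy sketch of such a penalty follows; the function names and the coefficient `lam` are illustrative choices, not taken from the paper:

```python
import numpy as np

def nuclear_norm(features):
    """Sum of singular values of a (batch_size, feature_dim) matrix."""
    return np.linalg.svd(features, compute_uv=False).sum()

def regularized_loss(task_loss, features, lam=0.01):
    """Task loss plus a nuclear norm penalty on the feature matrix.

    A low nuclear norm encourages the features to be low-rank,
    which the paper argues suppresses environmental features.
    """
    return task_loss + lam * nuclear_norm(features)

# Example: features spanning two directions vs. one.
low_rank = np.array([[1.0, 0.0], [2.0, 0.0]])   # rank 1
full_rank = np.array([[1.0, 0.0], [0.0, 2.0]])  # rank 2
```

In practice the penalty would be added to the training objective of a base method (e.g. ERM or SWAD, per the abstract) and backpropagated through the feature extractor; this sketch only shows the penalty itself.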

Cite this Paper


BibTeX
@InProceedings{pmlr-v234-shi24a,
  title     = {Domain Generalization via Nuclear Norm Regularization},
  author    = {Shi, Zhenmei and Ming, Yifei and Fan, Ying and Sala, Frederic and Liang, Yingyu},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {179--201},
  year      = {2024},
  editor    = {Chi, Yuejie and Dziugaite, Gintare Karolina and Qu, Qing and Wang, Atlas and Zhu, Zhihui},
  volume    = {234},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Jan},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v234/shi24a/shi24a.pdf},
  url       = {https://proceedings.mlr.press/v234/shi24a.html},
  abstract  = {The ability to generalize to unseen domains is crucial for machine learning systems deployed in the real world, especially when we only have data from limited training domains. In this paper, we propose a simple and effective regularization method based on the nuclear norm of the learned features for domain generalization. Intuitively, the proposed regularizer mitigates the impacts of environmental features and encourages learning domain-invariant features. Theoretically, we provide insights into why nuclear norm regularization is more effective compared to ERM and alternative regularization methods. Empirically, we conduct extensive experiments on both synthetic and real datasets. We show nuclear norm regularization achieves strong performance compared to baselines in a wide range of domain generalization tasks. Moreover, our regularizer is broadly applicable with various methods such as ERM and SWAD with consistently improved performance, e.g., 1.7% and 0.9% test accuracy improvements respectively on the DomainBed benchmark.}
}
Endnote
%0 Conference Paper
%T Domain Generalization via Nuclear Norm Regularization
%A Zhenmei Shi
%A Yifei Ming
%A Ying Fan
%A Frederic Sala
%A Yingyu Liang
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Yuejie Chi
%E Gintare Karolina Dziugaite
%E Qing Qu
%E Atlas Wang
%E Zhihui Zhu
%F pmlr-v234-shi24a
%I PMLR
%P 179--201
%U https://proceedings.mlr.press/v234/shi24a.html
%V 234
%X The ability to generalize to unseen domains is crucial for machine learning systems deployed in the real world, especially when we only have data from limited training domains. In this paper, we propose a simple and effective regularization method based on the nuclear norm of the learned features for domain generalization. Intuitively, the proposed regularizer mitigates the impacts of environmental features and encourages learning domain-invariant features. Theoretically, we provide insights into why nuclear norm regularization is more effective compared to ERM and alternative regularization methods. Empirically, we conduct extensive experiments on both synthetic and real datasets. We show nuclear norm regularization achieves strong performance compared to baselines in a wide range of domain generalization tasks. Moreover, our regularizer is broadly applicable with various methods such as ERM and SWAD with consistently improved performance, e.g., 1.7% and 0.9% test accuracy improvements respectively on the DomainBed benchmark.
APA
Shi, Z., Ming, Y., Fan, Y., Sala, F. & Liang, Y. (2024). Domain Generalization via Nuclear Norm Regularization. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 234:179-201. Available from https://proceedings.mlr.press/v234/shi24a.html.