Feature-Critic Networks for Heterogeneous Domain Generalization

Yiying Li, Yongxin Yang, Wei Zhou, Timothy Hospedales
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3915-3924, 2019.

Abstract

The well-known domain shift issue causes model performance to degrade when deployed to a new target domain with different statistics to training. Domain adaptation techniques alleviate this, but need some instances from the target domain to drive adaptation. Domain generalisation is the recently topical problem of learning a model that generalises to unseen domains out of the box, and various approaches aim to train a domain-invariant feature extractor, typically by adding some manually designed losses. In this work, we propose a learning-to-learn approach, where the auxiliary loss that helps generalisation is itself learned. Beyond conventional domain generalisation, we consider a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories. Experimental evaluation demonstrates that our method outperforms state-of-the-art solutions in both settings.
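The core idea in the abstract — a critic network that produces a learned auxiliary loss, trained so that a gradient step taken with it improves performance on held-out (meta-validation) domains — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the single linear feature extractor, the tiny critic, the learning rates, and the toy random "domains" are all assumptions made for brevity.

```python
# Sketch of a feature-critic-style learned auxiliary loss (illustrative only).
# A critic with parameters omega maps features to a non-negative scalar loss;
# omega is updated so that the aux-driven update of the feature extractor
# lowers meta-validation loss relative to a plain task-loss update.
import torch

torch.manual_seed(0)
d_in, d_feat, n_cls = 8, 4, 3

# Feature extractor parameters theta (one linear layer, for illustration).
W = torch.randn(d_in, d_feat, requires_grad=True)
# Classifier head and critic parameters omega (critic: mean feature -> scalar).
C = torch.randn(d_feat, n_cls, requires_grad=True)
V = torch.randn(d_feat, 1, requires_grad=True)

opt_omega = torch.optim.SGD([V], lr=1e-2)
ce = torch.nn.functional.cross_entropy
lr_theta = 1e-1

def task_loss(W_, x, y):
    return ce((x @ W_) @ C, y)

for step in range(5):
    # Toy stand-ins for meta-train and meta-validation domains.
    x_tr, y_tr = torch.randn(16, d_in), torch.randint(0, n_cls, (16,))
    x_va, y_va = torch.randn(16, d_in), torch.randint(0, n_cls, (16,))

    feats = x_tr @ W
    aux = torch.nn.functional.softplus(feats.mean(0) @ V).sum()  # learned aux loss
    main = task_loss(W, x_tr, y_tr)

    # Inner update of theta; create_graph=True keeps the gradient path to omega.
    gW = torch.autograd.grad(main + aux, W, create_graph=True)[0]
    W_new = W - lr_theta * gW

    # Critic objective: the aux-augmented update should beat a plain update
    # on the meta-validation domain (squashed through tanh).
    gW_plain = torch.autograd.grad(main, W, retain_graph=True)[0]
    W_plain = W - lr_theta * gW_plain
    critic_obj = torch.tanh(task_loss(W_new, x_va, y_va)
                            - task_loss(W_plain, x_va, y_va))

    opt_omega.zero_grad()
    critic_obj.backward()
    opt_omega.step()

    # Commit the inner-loop update to theta.
    with torch.no_grad():
        W.copy_(W_new)
```

The second-order gradient through `W_new` is what lets the critic learn: `critic_obj` depends on omega only via the inner gradient step, so `create_graph=True` is essential.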

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-li19l,
  title     = {Feature-Critic Networks for Heterogeneous Domain Generalization},
  author    = {Li, Yiying and Yang, Yongxin and Zhou, Wei and Hospedales, Timothy},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3915--3924},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/li19l/li19l.pdf},
  url       = {https://proceedings.mlr.press/v97/li19l.html},
  abstract  = {The well known domain shift issue causes model performance to degrade when deployed to a new target domain with different statistics to training. Domain adaptation techniques alleviate this, but need some instances from the target domain to drive adaptation. Domain generalisation is the recently topical problem of learning a model that generalises to unseen domains out of the box, and various approaches aim to train a domain-invariant feature extractor, typically by adding some manually designed losses. In this work, we propose a learning to learn approach, where the auxiliary loss that helps generalisation is itself learned. Beyond conventional domain generalisation, we consider a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories. Experimental evaluation demonstrates that our method outperforms state-of-the-art solutions in both settings.}
}
Endnote
%0 Conference Paper
%T Feature-Critic Networks for Heterogeneous Domain Generalization
%A Yiying Li
%A Yongxin Yang
%A Wei Zhou
%A Timothy Hospedales
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-li19l
%I PMLR
%P 3915--3924
%U https://proceedings.mlr.press/v97/li19l.html
%V 97
%X The well known domain shift issue causes model performance to degrade when deployed to a new target domain with different statistics to training. Domain adaptation techniques alleviate this, but need some instances from the target domain to drive adaptation. Domain generalisation is the recently topical problem of learning a model that generalises to unseen domains out of the box, and various approaches aim to train a domain-invariant feature extractor, typically by adding some manually designed losses. In this work, we propose a learning to learn approach, where the auxiliary loss that helps generalisation is itself learned. Beyond conventional domain generalisation, we consider a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories. Experimental evaluation demonstrates that our method outperforms state-of-the-art solutions in both settings.
APA
Li, Y., Yang, Y., Zhou, W., & Hospedales, T. (2019). Feature-Critic Networks for Heterogeneous Domain Generalization. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3915-3924. Available from https://proceedings.mlr.press/v97/li19l.html.