Balancing Discriminability and Transferability for Source-Free Domain Adaptation

Jogendra Nath Kundu, Akshay R Kulkarni, Suvaansh Bhambri, Deepesh Mehta, Shreyas Anand Kulkarni, Varun Jampani, Venkatesh Babu Radhakrishnan
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:11710-11728, 2022.

Abstract

Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations, while concurrently preserving the task-discriminability knowledge gathered from the labeled source data. However, the requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting. The trivial solution of realizing an effective original-to-generic domain mapping improves transferability but degrades task discriminability. Upon analyzing the hurdles from both theoretical and empirical standpoints, we derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off while duly respecting the privacy-oriented source-free setting. A simple but effective realization of the proposed insights on top of the existing source-free DA approaches yields state-of-the-art performance with faster convergence. Beyond single-source, we also outperform multi-source prior arts across both classification and semantic segmentation benchmarks.
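The core operation named in the abstract, a mixup between an original sample and its translated generic counterpart, can be sketched as a convex combination of the two inputs. The sketch below is illustrative only: the mixing coefficient's Beta distribution, the `alpha` value, and the function name are assumptions, not the paper's exact formulation.

```python
import numpy as np

def mixup_with_translated(x_orig, x_translated, alpha=0.3, rng=None):
    """Convex combination of an original sample and its domain-translated
    generic counterpart (illustrative sketch; the paper's exact mixing
    scheme may differ). The Beta(alpha, alpha) coefficient follows the
    common mixup convention."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * x_orig + (1.0 - lam) * x_translated

# Toy usage: mix a stand-in "original" tensor with its "generic" translation.
x_o = np.ones((3, 32, 32))    # placeholder for an original sample
x_g = np.zeros((3, 32, 32))   # placeholder for its translated generic version
x_mix = mixup_with_translated(x_o, x_g)
```

Because the result is a convex combination, the mixed sample stays on the line segment between the two domain versions, which is how it can trade off discriminability (original) against transferability (generic).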

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-kundu22a,
  title = {Balancing Discriminability and Transferability for Source-Free Domain Adaptation},
  author = {Kundu, Jogendra Nath and Kulkarni, Akshay R and Bhambri, Suvaansh and Mehta, Deepesh and Kulkarni, Shreyas Anand and Jampani, Varun and Radhakrishnan, Venkatesh Babu},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages = {11710--11728},
  year = {2022},
  editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume = {162},
  series = {Proceedings of Machine Learning Research},
  month = {17--23 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v162/kundu22a/kundu22a.pdf},
  url = {https://proceedings.mlr.press/v162/kundu22a.html},
  abstract = {Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations, while concurrently preserving the task-discriminability knowledge gathered from the labeled source data. However, the requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting. The trivial solution of realizing an effective original-to-generic domain mapping improves transferability but degrades task discriminability. Upon analyzing the hurdles from both theoretical and empirical standpoints, we derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off while duly respecting the privacy-oriented source-free setting. A simple but effective realization of the proposed insights on top of the existing source-free DA approaches yields state-of-the-art performance with faster convergence. Beyond single-source, we also outperform multi-source prior arts across both classification and semantic segmentation benchmarks.}
}
Endnote
%0 Conference Paper
%T Balancing Discriminability and Transferability for Source-Free Domain Adaptation
%A Jogendra Nath Kundu
%A Akshay R Kulkarni
%A Suvaansh Bhambri
%A Deepesh Mehta
%A Shreyas Anand Kulkarni
%A Varun Jampani
%A Venkatesh Babu Radhakrishnan
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-kundu22a
%I PMLR
%P 11710--11728
%U https://proceedings.mlr.press/v162/kundu22a.html
%V 162
%X Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations, while concurrently preserving the task-discriminability knowledge gathered from the labeled source data. However, the requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting. The trivial solution of realizing an effective original-to-generic domain mapping improves transferability but degrades task discriminability. Upon analyzing the hurdles from both theoretical and empirical standpoints, we derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off while duly respecting the privacy-oriented source-free setting. A simple but effective realization of the proposed insights on top of the existing source-free DA approaches yields state-of-the-art performance with faster convergence. Beyond single-source, we also outperform multi-source prior arts across both classification and semantic segmentation benchmarks.
APA
Kundu, J.N., Kulkarni, A.R., Bhambri, S., Mehta, D., Kulkarni, S.A., Jampani, V. & Radhakrishnan, V.B. (2022). Balancing Discriminability and Transferability for Source-Free Domain Adaptation. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:11710-11728. Available from https://proceedings.mlr.press/v162/kundu22a.html.