A Closer Look at Smoothness in Domain Adversarial Training

Harsh Rangwani, Sumukh K Aithal, Mayank Mishra, Arihant Jain, Venkatesh Babu Radhakrishnan
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:18378-18399, 2022.

Abstract

Domain adversarial training has been ubiquitous for achieving invariant representations and is used widely for various domain adaptation tasks. In recent times, methods converging to smooth optima have shown improved generalization for supervised learning tasks like classification. In this work, we analyze the effect of smoothness-enhancing formulations on domain adversarial training, whose objective is a combination of task loss (e.g., classification, regression) and adversarial terms. We find that converging to a smooth minimum with respect to (w.r.t.) the task loss stabilizes adversarial training, leading to better performance on the target domain. In contrast to the task loss, our analysis shows that converging to a smooth minimum w.r.t. the adversarial loss leads to sub-optimal generalization on the target domain. Based on this analysis, we introduce the Smooth Domain Adversarial Training (SDAT) procedure, which effectively enhances the performance of existing domain adversarial methods for both classification and object detection tasks. Our analysis also provides insight into the extensive usage of SGD over Adam in the community for domain adversarial training.
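
The core idea lends itself to a short sketch. Below is a minimal, illustrative PyTorch version of one SDAT-style training step under a DANN-like setup: a sharpness-aware (SAM-style) weight perturbation is computed from the task loss only, while the adversarial term is minimized as usual through a gradient reversal layer. All names (feature_extractor, classifier, domain_disc, rho) and the toy data are assumptions made for illustration, not the authors' released code or hyperparameters.

# Minimal sketch of one Smooth Domain Adversarial Training (SDAT) step,
# assuming a DANN-style setup. Model and variable names (feature_extractor,
# classifier, domain_disc, rho) and the toy data are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used by DANN-style adversarial training."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
classifier = nn.Linear(64, 10)   # task head, trained on labeled source data
domain_disc = nn.Linear(64, 2)   # domain discriminator (source vs. target)

task_params = list(feature_extractor.parameters()) + list(classifier.parameters())
opt = torch.optim.SGD(task_params + list(domain_disc.parameters()),
                      lr=0.01, momentum=0.9)
rho = 0.05  # SAM neighborhood radius for the task loss (illustrative value)


def task_loss(x_s, y_s):
    return F.cross_entropy(classifier(feature_extractor(x_s)), y_s)


def adv_loss(x_s, x_t):
    # Domain classification loss; the reversed gradient pushes the
    # feature extractor towards domain-invariant representations.
    feats = torch.cat([feature_extractor(x_s), feature_extractor(x_t)])
    labels = torch.cat([torch.zeros(len(x_s), dtype=torch.long),
                        torch.ones(len(x_t), dtype=torch.long)])
    return F.cross_entropy(domain_disc(GradReverse.apply(feats)), labels)


# Toy source/target batches.
x_s, y_s = torch.randn(8, 32), torch.randint(0, 10, (8,))
x_t = torch.randn(8, 32)

# 1) Sharpness-aware ascent step, computed from the task loss ONLY.
opt.zero_grad()
task_loss(x_s, y_s).backward()
with torch.no_grad():
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in task_params]))
    perturbations = []
    for p in task_params:
        e = rho * p.grad / (grad_norm + 1e-12)
        p.add_(e)                      # move to the worst-case neighbor
        perturbations.append(e)

# 2) Descent step: task loss at the perturbed weights + adversarial loss.
opt.zero_grad()
(task_loss(x_s, y_s) + adv_loss(x_s, x_t)).backward()
with torch.no_grad():
    for p, e in zip(task_params, perturbations):
        p.sub_(e)                      # undo the perturbation before updating
opt.step()

The design choice mirrored here is that the ascent (perturbation) step uses only the task-loss gradient, reflecting the paper's finding that smoothness w.r.t. the task loss helps target-domain performance while smoothness w.r.t. the adversarial loss hurts it.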

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-rangwani22a,
  title     = {A Closer Look at Smoothness in Domain Adversarial Training},
  author    = {Rangwani, Harsh and Aithal, Sumukh K and Mishra, Mayank and Jain, Arihant and Radhakrishnan, Venkatesh Babu},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {18378--18399},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/rangwani22a/rangwani22a.pdf},
  url       = {https://proceedings.mlr.press/v162/rangwani22a.html},
  abstract  = {Domain adversarial training has been ubiquitous for achieving invariant representations and is used widely for various domain adaptation tasks. In recent times, methods converging to smooth optima have shown improved generalization for supervised learning tasks like classification. In this work, we analyze the effect of smoothness enhancing formulations on domain adversarial training, the objective of which is a combination of task loss (eg. classification, regression etc.) and adversarial terms. We find that converging to a smooth minima with respect to (w.r.t.) task loss stabilizes the adversarial training leading to better performance on target domain. In contrast to task loss, our analysis shows that converging to smooth minima w.r.t. adversarial loss leads to sub-optimal generalization on the target domain. Based on the analysis, we introduce the Smooth Domain Adversarial Training (SDAT) procedure, which effectively enhances the performance of existing domain adversarial methods for both classification and object detection tasks. Our analysis also provides insight into the extensive usage of SGD over Adam in the community for domain adversarial training.}
}
Endnote
%0 Conference Paper
%T A Closer Look at Smoothness in Domain Adversarial Training
%A Harsh Rangwani
%A Sumukh K Aithal
%A Mayank Mishra
%A Arihant Jain
%A Venkatesh Babu Radhakrishnan
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-rangwani22a
%I PMLR
%P 18378--18399
%U https://proceedings.mlr.press/v162/rangwani22a.html
%V 162
%X Domain adversarial training has been ubiquitous for achieving invariant representations and is used widely for various domain adaptation tasks. In recent times, methods converging to smooth optima have shown improved generalization for supervised learning tasks like classification. In this work, we analyze the effect of smoothness enhancing formulations on domain adversarial training, the objective of which is a combination of task loss (eg. classification, regression etc.) and adversarial terms. We find that converging to a smooth minima with respect to (w.r.t.) task loss stabilizes the adversarial training leading to better performance on target domain. In contrast to task loss, our analysis shows that converging to smooth minima w.r.t. adversarial loss leads to sub-optimal generalization on the target domain. Based on the analysis, we introduce the Smooth Domain Adversarial Training (SDAT) procedure, which effectively enhances the performance of existing domain adversarial methods for both classification and object detection tasks. Our analysis also provides insight into the extensive usage of SGD over Adam in the community for domain adversarial training.
APA
Rangwani, H., Aithal, S. K., Mishra, M., Jain, A. & Radhakrishnan, V. B. (2022). A Closer Look at Smoothness in Domain Adversarial Training. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:18378-18399. Available from https://proceedings.mlr.press/v162/rangwani22a.html.
