Moment Alignment: Unifying Gradient and Hessian Matching for Domain Generalization

Yuen Chen, Haozhe Si, Guojun Zhang, Han Zhao
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:705-736, 2025.

Abstract

Domain generalization (DG) seeks to develop models that generalize well to unseen target domains, addressing distribution shifts in real-world applications. One line of research in DG focuses on aligning domain-level gradients and Hessians to enhance generalization. However, existing methods are computationally inefficient and the underlying principles of these approaches are not well understood. In this paper, we develop a theory of moment alignment for DG. Grounded in transfer measures, a principled framework for quantifying generalizability between domains, we prove that aligning derivatives across domains improves transfer measures. Moment alignment provides a unifying understanding of Invariant Risk Minimization, gradient matching, and Hessian matching, three previously disconnected approaches. We further establish the duality between feature moments and derivatives of the classifier head. Building upon our theory, we introduce Closed-Form Moment Alignment (CMA), a novel DG algorithm that aligns domain-level gradients and Hessians in closed-form. Our method overcomes the computational inefficiencies of existing gradient and Hessian-based techniques by eliminating the need for repeated backpropagation or sampling-based Hessian estimation. We validate our theory and algorithm through quantitative and qualitative experiments.

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-chen25f,
  title = {Moment Alignment: Unifying Gradient and Hessian Matching for Domain Generalization},
  author = {Chen, Yuen and Si, Haozhe and Zhang, Guojun and Zhao, Han},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages = {705--736},
  year = {2025},
  editor = {Chiappa, Silvia and Magliacane, Sara},
  volume = {286},
  series = {Proceedings of Machine Learning Research},
  month = {21--25 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/chen25f/chen25f.pdf},
  url = {https://proceedings.mlr.press/v286/chen25f.html},
  abstract = {Domain generalization (DG) seeks to develop models that generalize well to unseen target domains, addressing distribution shifts in real-world applications. One line of research in DG focuses on aligning domain-level gradients and Hessians to enhance generalization. However, existing methods are computationally inefficient and the underlying principles of these approaches are not well understood. In this paper, we develop a theory of moment alignment for DG. Grounded in transfer measures, a principled framework for quantifying generalizability between domains, we prove that aligning derivatives across domains improves transfer measures. Moment alignment provides a unifying understanding of Invariant Risk Minimization, gradient matching, and Hessian matching, three previously disconnected approaches. We further establish the duality between feature moments and derivatives of the classifier head. Building upon our theory, we introduce Closed-Form Moment Alignment (CMA), a novel DG algorithm that aligns domain-level gradients and Hessians in closed-form. Our method overcomes the computational inefficiencies of existing gradient and Hessian-based techniques by eliminating the need for repeated backpropagation or sampling-based Hessian estimation. We validate our theory and algorithm through quantitative and qualitative experiments.}
}
Endnote
%0 Conference Paper
%T Moment Alignment: Unifying Gradient and Hessian Matching for Domain Generalization
%A Yuen Chen
%A Haozhe Si
%A Guojun Zhang
%A Han Zhao
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-chen25f
%I PMLR
%P 705--736
%U https://proceedings.mlr.press/v286/chen25f.html
%V 286
%X Domain generalization (DG) seeks to develop models that generalize well to unseen target domains, addressing distribution shifts in real-world applications. One line of research in DG focuses on aligning domain-level gradients and Hessians to enhance generalization. However, existing methods are computationally inefficient and the underlying principles of these approaches are not well understood. In this paper, we develop a theory of moment alignment for DG. Grounded in transfer measures, a principled framework for quantifying generalizability between domains, we prove that aligning derivatives across domains improves transfer measures. Moment alignment provides a unifying understanding of Invariant Risk Minimization, gradient matching, and Hessian matching, three previously disconnected approaches. We further establish the duality between feature moments and derivatives of the classifier head. Building upon our theory, we introduce Closed-Form Moment Alignment (CMA), a novel DG algorithm that aligns domain-level gradients and Hessians in closed-form. Our method overcomes the computational inefficiencies of existing gradient and Hessian-based techniques by eliminating the need for repeated backpropagation or sampling-based Hessian estimation. We validate our theory and algorithm through quantitative and qualitative experiments.
APA
Chen, Y., Si, H., Zhang, G. & Zhao, H. (2025). Moment Alignment: Unifying Gradient and Hessian Matching for Domain Generalization. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:705-736. Available from https://proceedings.mlr.press/v286/chen25f.html.
