An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization

Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:2641-2657, 2022.

Abstract

A popular assumption for out-of-distribution generalization is that the training data comprises sub-datasets, each drawn from a distinct distribution; the goal is then to "interpolate" these distributions and "extrapolate" beyond them—this objective is broadly known as domain generalization. A common belief is that ERM can interpolate but not extrapolate and that the latter task is considerably more difficult, but these claims are vague and lack formal justification. In this work, we recast generalization over sub-groups as an online game between a player minimizing risk and an adversary presenting new test distributions. Under an existing notion of inter- and extrapolation based on reweighting of sub-group likelihoods, we rigorously demonstrate that extrapolation is computationally much harder than interpolation, though their statistical complexity is not significantly different. Furthermore, we show that ERM—possibly with added structured noise—is provably minimax-optimal for both tasks. Our framework presents a new avenue for the formal analysis of domain generalization algorithms which may be of independent interest.

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-rosenfeld22a,
  title     = {An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization},
  author    = {Rosenfeld, Elan and Ravikumar, Pradeep and Risteski, Andrej},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {2641--2657},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/rosenfeld22a/rosenfeld22a.pdf},
  url       = {https://proceedings.mlr.press/v151/rosenfeld22a.html},
  abstract  = {A popular assumption for out-of-distribution generalization is that the training data comprises sub-datasets, each drawn from a distinct distribution; the goal is then to "interpolate" these distributions and "extrapolate" beyond them—this objective is broadly known as domain generalization. A common belief is that ERM can interpolate but not extrapolate and that the latter task is considerably more difficult, but these claims are vague and lack formal justification. In this work, we recast generalization over sub-groups as an online game between a player minimizing risk and an adversary presenting new test distributions. Under an existing notion of inter- and extrapolation based on reweighting of sub-group likelihoods, we rigorously demonstrate that extrapolation is computationally much harder than interpolation, though their statistical complexity is not significantly different. Furthermore, we show that ERM—possibly with added structured noise—is provably minimax-optimal for both tasks. Our framework presents a new avenue for the formal analysis of domain generalization algorithms which may be of independent interest.}
}
Endnote
%0 Conference Paper
%T An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization
%A Elan Rosenfeld
%A Pradeep Ravikumar
%A Andrej Risteski
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-rosenfeld22a
%I PMLR
%P 2641--2657
%U https://proceedings.mlr.press/v151/rosenfeld22a.html
%V 151
%X A popular assumption for out-of-distribution generalization is that the training data comprises sub-datasets, each drawn from a distinct distribution; the goal is then to "interpolate" these distributions and "extrapolate" beyond them—this objective is broadly known as domain generalization. A common belief is that ERM can interpolate but not extrapolate and that the latter task is considerably more difficult, but these claims are vague and lack formal justification. In this work, we recast generalization over sub-groups as an online game between a player minimizing risk and an adversary presenting new test distributions. Under an existing notion of inter- and extrapolation based on reweighting of sub-group likelihoods, we rigorously demonstrate that extrapolation is computationally much harder than interpolation, though their statistical complexity is not significantly different. Furthermore, we show that ERM—possibly with added structured noise—is provably minimax-optimal for both tasks. Our framework presents a new avenue for the formal analysis of domain generalization algorithms which may be of independent interest.
APA
Rosenfeld, E., Ravikumar, P. &amp; Risteski, A. (2022). An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:2641-2657. Available from https://proceedings.mlr.press/v151/rosenfeld22a.html.