Enhancing Distributional Stability among Sub-populations

Jiashuo Liu, Jiayun Wu, Jie Peng, Xiaoyu Wu, Yang Zheng, Bo Li, Peng Cui
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2125-2133, 2024.

Abstract

Enhancing the stability of machine learning algorithms under distributional shifts is at the heart of the Out-of-Distribution (OOD) Generalization problem. Derived from causal learning, recent works on invariant learning pursue strict invariance across multiple training environments. Although intuitively reasonable, these methods rely on strong assumptions about the availability and quality of training environments to learn the strict invariance property. In this work, we introduce the notion of “distributional stability” to mitigate such limitations. It quantifies the stability of prediction mechanisms among sub-populations down to a prescribed scale. Based on this notion, we propose a learnability assumption and derive a generalization error bound under distribution shifts. Inspired by the theoretical analysis, we propose a novel stable risk minimization (SRM) algorithm that enhances the model’s stability with respect to shifts in prediction mechanisms (Y|X-shifts). Experimental results are consistent with our intuition and validate the effectiveness of the algorithm. The code can be found at https://github.com/LJSthu/SRM.
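The SRM objective itself is specified in the paper and the linked repository. Purely as an illustration of the underlying idea described above (discouraging instability of the loss across sub-populations with differing Y|X mechanisms), the sketch below trains a linear model with a variance penalty over per-group losses. The helper name `stability_penalized_loss`, the synthetic data, and the assumption that sub-population labels `g` are given are all hypothetical choices for this sketch; it is not the paper's SRM algorithm.

```python
# Illustrative sketch only (NOT the paper's SRM algorithm):
# empirical risk plus a penalty on the variance of per-sub-population losses,
# assuming sub-population labels are available.
import torch
import torch.nn as nn
import torch.nn.functional as F


def stability_penalized_loss(model, x, y, g, penalty_weight=1.0):
    # Mean squared error plus the variance of per-group mean losses.
    per_sample = F.mse_loss(model(x).squeeze(-1), y, reduction="none")
    group_losses = torch.stack(
        [per_sample[g == k].mean() for k in torch.unique(g)]
    )
    return per_sample.mean() + penalty_weight * group_losses.var()


# Toy usage: two hypothetical sub-populations whose Y|X mechanisms differ.
torch.manual_seed(0)
x = torch.randn(256, 5)
g = (torch.rand(256) > 0.5).long()                       # sub-population labels
slope = torch.where(g == 1, torch.tensor(2.0), torch.tensor(-2.0))
y = x[:, 0] + slope * x[:, 1] + 0.1 * torch.randn(256)   # group-dependent Y|X
model = nn.Linear(5, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
for _ in range(300):
    optimizer.zero_grad()
    loss = stability_penalized_loss(model, x, y, g)
    loss.backward()
    optimizer.step()
```

The penalty weight trades off average accuracy against how evenly the loss is spread over the given groups; the paper's notion of distributional stability concerns sub-populations down to a prescribed scale rather than fixed, pre-labeled groups.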

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-liu24c,
  title     = {Enhancing Distributional Stability among Sub-populations},
  author    = {Liu, Jiashuo and Wu, Jiayun and Peng, Jie and Wu, Xiaoyu and Zheng, Yang and Li, Bo and Cui, Peng},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2125--2133},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/liu24c/liu24c.pdf},
  url       = {https://proceedings.mlr.press/v238/liu24c.html}
}
Endnote
%0 Conference Paper
%T Enhancing Distributional Stability among Sub-populations
%A Jiashuo Liu
%A Jiayun Wu
%A Jie Peng
%A Xiaoyu Wu
%A Yang Zheng
%A Bo Li
%A Peng Cui
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-liu24c
%I PMLR
%P 2125--2133
%U https://proceedings.mlr.press/v238/liu24c.html
%V 238
APA
Liu, J., Wu, J., Peng, J., Wu, X., Zheng, Y., Li, B. & Cui, P. (2024). Enhancing Distributional Stability among Sub-populations. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2125-2133. Available from https://proceedings.mlr.press/v238/liu24c.html.