Model Assessment and Selection under Temporal Distribution Shift

Elise Han, Chengpiao Huang, Kaizheng Wang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:17374-17392, 2024.

Abstract

We investigate model assessment and selection in a changing environment, by synthesizing datasets from both the current time period and historical epochs. To tackle unknown and potentially arbitrary temporal distribution shift, we develop an adaptive rolling window approach to estimate the generalization error of a given model. This strategy also facilitates the comparison between any two candidate models by estimating the difference of their generalization errors. We further integrate pairwise comparisons into a single-elimination tournament, achieving near-optimal model selection from a collection of candidates. Theoretical analyses and empirical experiments underscore the adaptivity of our proposed methods to the non-stationarity in data.
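To give a concrete sense of the approach summarized above, the following minimal Python sketch mimics the two ingredients named in the abstract: a generalization-error estimate over an adaptively chosen rolling window, and a single-elimination tournament built from pairwise comparisons. This is an illustration under simplifying assumptions, not the authors' algorithm; in particular, the bias-plus-drift window rule and the helper names (rolling_error_estimate, tournament_select, losses_of) are hypothetical stand-ins for the paper's procedure.

import numpy as np

def rolling_error_estimate(losses_by_period):
    # losses_by_period: list of 1-D arrays of held-out losses, oldest period
    # first; losses_by_period[-1] is the current period.
    T = len(losses_by_period)
    best_est, best_width = None, np.inf
    for w in range(1, T + 1):
        window = np.concatenate(
            [np.asarray(l, dtype=float) for l in losses_by_period[T - w:]]
        )
        est = window.mean()
        # Crude surrogate for an adaptive window rule: sampling error of the
        # window average plus a drift penalty measuring disagreement with the
        # most recent period (a proxy for temporal distribution shift).
        sampling = window.std(ddof=1) / np.sqrt(len(window)) if len(window) > 1 else 0.0
        drift = abs(est - float(np.mean(losses_by_period[-1])))
        width = sampling + drift
        if best_est is None or width < best_width:
            best_est, best_width = est, width
    return best_est

def tournament_select(models, losses_of):
    # Single-elimination tournament: in each round, pair up the surviving
    # candidates and keep the one with the smaller estimated error.
    pool = list(models)
    while len(pool) > 1:
        winners = []
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            if rolling_error_estimate(losses_of(a)) <= rolling_error_estimate(losses_of(b)):
                winners.append(a)
            else:
                winners.append(b)
        if len(pool) % 2 == 1:
            winners.append(pool[-1])  # unpaired candidate receives a bye
        pool = winners
    return pool[0]

Here losses_of(model) would return that model's held-out losses in each past period; the paper's adaptive window choice and pairwise comparisons come with theoretical guarantees that this heuristic does not attempt to reproduce.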

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-han24b,
  title     = {Model Assessment and Selection under Temporal Distribution Shift},
  author    = {Han, Elise and Huang, Chengpiao and Wang, Kaizheng},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {17374--17392},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/han24b/han24b.pdf},
  url       = {https://proceedings.mlr.press/v235/han24b.html},
  abstract  = {We investigate model assessment and selection in a changing environment, by synthesizing datasets from both the current time period and historical epochs. To tackle unknown and potentially arbitrary temporal distribution shift, we develop an adaptive rolling window approach to estimate the generalization error of a given model. This strategy also facilitates the comparison between any two candidate models by estimating the difference of their generalization errors. We further integrate pairwise comparisons into a single-elimination tournament, achieving near-optimal model selection from a collection of candidates. Theoretical analyses and empirical experiments underscore the adaptivity of our proposed methods to the non-stationarity in data.}
}
Endnote
%0 Conference Paper
%T Model Assessment and Selection under Temporal Distribution Shift
%A Elise Han
%A Chengpiao Huang
%A Kaizheng Wang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-han24b
%I PMLR
%P 17374--17392
%U https://proceedings.mlr.press/v235/han24b.html
%V 235
%X We investigate model assessment and selection in a changing environment, by synthesizing datasets from both the current time period and historical epochs. To tackle unknown and potentially arbitrary temporal distribution shift, we develop an adaptive rolling window approach to estimate the generalization error of a given model. This strategy also facilitates the comparison between any two candidate models by estimating the difference of their generalization errors. We further integrate pairwise comparisons into a single-elimination tournament, achieving near-optimal model selection from a collection of candidates. Theoretical analyses and empirical experiments underscore the adaptivity of our proposed methods to the non-stationarity in data.
APA
Han, E., Huang, C. & Wang, K. (2024). Model Assessment and Selection under Temporal Distribution Shift. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:17374-17392. Available from https://proceedings.mlr.press/v235/han24b.html.