Bayesian leave-one-out cross-validation for large data
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4244-4253, 2019.
Abstract
Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO) is a general approach for assessing the generalizability of a model, but unfortunately LOO does not scale well to large datasets. We propose combining approximate inference techniques with probability-proportional-to-size sampling (PPS) for fast LOO model evaluation on large datasets. We provide both theoretical and empirical results showing good properties for large data.
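To illustrate the general idea described in the abstract, the following is a minimal sketch (not the paper's implementation) of estimating the total LOO expected log predictive density by subsampling observations with probability proportional to a cheap proxy. It assumes a Hansen–Hurwitz PPS-with-replacement estimator, where `elpd_approx` holds approximate pointwise elpd values from a fast approximate posterior (used only as size proxies) and `elpd_exact_fn` is a hypothetical callable returning the expensive pointwise elpd for a single observation; both names are illustrative assumptions.

```python
import numpy as np

def pps_loo_estimate(elpd_approx, elpd_exact_fn, m, rng=None):
    """Sketch: PPS-subsampled estimate of the total elpd_loo.

    elpd_approx   : array of cheap approximate pointwise elpd values (proxy sizes)
    elpd_exact_fn : callable i -> expensive pointwise elpd for observation i
    m             : number of observations to subsample
    Returns (estimate, standard_error) for sum_i elpd_i.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(elpd_approx)

    # Draw probabilities proportional to the magnitude of the approximate elpd
    # (assumed proxy choice for illustration).
    p = np.abs(elpd_approx)
    p = p / p.sum()

    # PPS sampling with replacement.
    idx = rng.choice(n, size=m, replace=True, p=p)

    # Hansen-Hurwitz estimator: average of elpd_i / p_i over the subsample.
    draws = np.array([elpd_exact_fn(i) / p[i] for i in idx])
    estimate = draws.mean()
    standard_error = draws.std(ddof=1) / np.sqrt(m)
    return estimate, standard_error
```

In this sketch, only the `m` subsampled observations require the expensive per-observation computation, while the proxy values for all `n` observations come from a single cheap approximate fit, which is what makes the approach attractive for large datasets.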