Linear predictor on linearly-generated data with missing values: non consistency and solutions
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3165-3174, 2020.
We consider building predictors when the data have missing values. We study the seemingly simple case where the target to predict is a linear function of the fully observed data, and we show that, in the presence of missing values, the optimal predictor is in general not linear. In the particular Gaussian case, it can be written as a linear function of multiway interactions between the observed data and the various missing-value indicators. Due to its intrinsic complexity, we study a simple approximation and prove generalization bounds with finite samples, highlighting regimes in which each method performs best. We then show that multilayer perceptrons with ReLU activation functions can be consistent, and can explore good trade-offs between the true model and its approximations. Our study highlights which families of models are beneficial to fit in the presence of missing values, depending on the amount of data available.
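The abstract's key observation, that a good predictor under missingness uses interactions between observed values and missing-value indicators, can be illustrated with a minimal sketch. The setup below (zero-imputation, missing-completely-at-random entries, and a pairwise truncation of the full multiway expansion) is our own illustrative choice, not the paper's exact construction or experimental protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate linearly-generated data with entries missing completely at random
# (an illustrative setup; the paper studies more general settings).
n, d = 2000, 3
X_full = rng.normal(size=(n, d))
beta = np.array([1.0, -2.0, 0.5])
y = X_full @ beta + 0.1 * rng.normal(size=n)

mask = rng.random((n, d)) < 0.3          # True where an entry is missing
X_obs = np.where(mask, 0.0, X_full)      # zero-impute the missing entries

# Expanded design: imputed values, missingness indicators, and their
# pairwise interactions (a truncation of the full multiway expansion).
inter = np.einsum('ni,nj->nij', X_obs, mask.astype(float)).reshape(n, -1)
Z = np.hstack([np.ones((n, 1)), X_obs, mask.astype(float), inter])

# Ordinary least squares on the expanded features.
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
mse_expanded = np.mean((y - Z @ coef) ** 2)

# Baseline: a linear model on the zero-imputed data alone.
Z0 = np.hstack([np.ones((n, 1)), X_obs])
coef0, *_ = np.linalg.lstsq(Z0, y, rcond=None)
mse_linear = np.mean((y - Z0 @ coef0) ** 2)

# The expanded design nests the baseline, so its in-sample error is no worse.
print(mse_expanded <= mse_linear + 1e-9)
```

Because the baseline's columns are a subset of the expanded design, the richer model always fits at least as well in sample; the paper's finite-sample bounds address when this expressiveness helps or hurts generalization.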