Detecting Dependencies in Sparse, Multivariate Databases Using Probabilistic Programming and Nonparametric Bayes
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:632-641, 2017.
Abstract
Datasets with hundreds of variables and many missing values are commonplace. In this setting, it is both statistically and computationally challenging to detect true predictive relationships between variables and also to suppress false positives. This paper proposes an approach that combines probabilistic programming, information theory, and nonparametric Bayes. It shows how to use Bayesian nonparametric modeling to (i) build an ensemble of joint probability models for all the variables; (ii) efficiently detect marginal independencies; and (iii) estimate the conditional mutual information between arbitrary subsets of variables, subject to a broad class of constraints. Users can access these capabilities using BayesDB, a probabilistic programming platform for probabilistic data analysis, by writing queries in a simple, SQL-like language. This paper demonstrates empirically that the method can (i) detect context-specific (in)dependencies on challenging synthetic problems and (ii) yield improved sensitivity and specificity over baselines from statistics and machine learning, on a real-world database of over 300 sparsely observed indicators of macroeconomic development and public health.
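To illustrate the kind of SQL-like query the abstract refers to, the sketch below shows a plausible BayesDB/BQL session for the capabilities listed: building a model ensemble, then querying dependence probability and mutual information between indicators. The exact statement syntax and all names (`macro`, `macro_cc`, `gdp_growth`, `infant_mortality`) are illustrative assumptions, not taken from the paper.

```sql
-- Hypothetical BQL session; syntax and names are illustrative, not verbatim.

-- (i) Build an ensemble of joint probability models over all variables:
CREATE POPULATION macro FROM macro_table;
CREATE GENERATOR macro_cc FOR macro USING crosscat;
ANALYZE macro_cc FOR 100 ITERATIONS;

-- (ii) Probability, under the model ensemble, that two sparsely observed
-- indicators are dependent:
ESTIMATE DEPENDENCE PROBABILITY OF gdp_growth WITH infant_mortality
    BY macro;

-- (iii) Mutual information between the same pair of indicators:
ESTIMATE MUTUAL INFORMATION OF gdp_growth WITH infant_mortality
    BY macro;
```

Queries of this form return Monte Carlo estimates averaged over the posterior ensemble, which is what lets the approach report calibrated dependence probabilities rather than point decisions.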