Learning Priors for Invariance
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:366-375, 2018.
Abstract
Informative priors are often difficult, if not impossible, to elicit for modern large-scale Bayesian models. Yet some prior knowledge is often available, and in practice it is incorporated through engineering tricks or methods less principled than a Bayesian prior. Such tricks are difficult to reconcile with principled probabilistic inference. For instance, in the case of data set augmentation, the posterior is conditioned on artificial data rather than on what is actually observed. In this paper, we address the problem of how to specify an informative prior when the problem of interest is known to exhibit invariance properties. The proposed method is akin to posterior variational inference: we choose a parametric family and optimize to find the member of the family that makes the model robust to a given transformation. We demonstrate the method’s utility for dropout and rotation transformations, showing that the use of these priors results in performance competitive with that of non-Bayesian methods. Furthermore, our approach does not depend on the data being labeled and thus can be used in semi-supervised settings.
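The core idea can be illustrated with a toy sketch (this is not the paper's exact objective or model; the linear model, the swap transformation, the closed-form invariance loss, and the KL regularizer are all illustrative assumptions). We take a linear model with a factorized Gaussian prior over its weights and do gradient descent on the prior parameters so that functions sampled from the prior are robust to a fixed input transformation, using only unlabeled inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's model): linear model
# f_w(x) = w.x with a factorized Gaussian prior w ~ N(mu, diag(sigma^2)).
# We optimize (mu, log_sigma) so that draws from the prior are robust to
# a fixed transformation T -- here, swapping the first two coordinates.
D = 4
X = rng.normal(size=(256, D))          # unlabeled inputs (no labels needed)
T = np.eye(D)[[1, 0, 2, 3]]            # transformation: swap coords 0 and 1
Dx = X - X @ T.T                       # d = x - Tx for each input

mu = rng.normal(size=D)
log_sigma = np.zeros(D)
lr, lam = 0.1, 0.1                     # step size, KL regularization weight

for _ in range(500):
    sigma2 = np.exp(2 * log_sigma)
    # Closed-form invariance loss for a linear model:
    #   E_w[(f_w(x) - f_w(Tx))^2] = (mu.d)^2 + sum_i sigma_i^2 d_i^2
    proj = Dx @ mu                               # (mu.d) per example
    g_mu = 2 * (Dx * proj[:, None]).mean(0)      # grad of mean (mu.d)^2
    g_ls = 2 * sigma2 * (Dx ** 2).mean(0)        # grad wrt log_sigma
    # KL(N(mu, Sigma) || N(0, I)) keeps the prior from collapsing to zero.
    g_mu += lam * mu
    g_ls += lam * (sigma2 - 1)
    mu -= lr * g_mu
    log_sigma -= lr * g_ls

sigma = np.exp(log_sigma)
print(mu.round(3), sigma.round(3))
```

The learned prior becomes symmetric under the swap (matched means and scales on the first two coordinates), shrinks variance along the direction that breaks invariance, and leaves the untouched coordinates at the standard-normal default. Because the objective involves only inputs and their transformed versions, no labels are required, consistent with the semi-supervised applicability noted above.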