Adversarial Interpretation of Bayesian Inference
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:553-572, 2022.
Abstract
We build on the optimization-centric view of Bayesian inference advocated by Knoblauch et al. (2019). Thinking about Bayesian and generalized Bayesian posteriors as solutions to a regularized minimization problem allows us to answer an intriguing question: if minimization is the primal problem, then what is its dual? By deriving the Fenchel dual of this problem, we show that it corresponds to an adversarial game: in the dual space, the prior becomes the cost function for an adversary that seeks to perturb the likelihood [loss] function targeted by standard [generalized] Bayesian inference. This implies that Bayes-like procedures are adversarially robust, providing a further firm theoretical foundation for their empirical performance. Our contributions are foundational and apply to a wide range of machine learning methods, including standard Bayesian inference, generalized Bayesian and Gibbs posteriors (Bissiri et al., 2016), Generalized Variational Inference (Knoblauch et al., 2019), and the Wasserstein Autoencoder (Tolstikhin et al., 2017).
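To make the primal-dual correspondence concrete, the following is a minimal LaTeX sketch of the optimization-centric primal and one standard route to an adversarial dual, via the Donsker-Varadhan representation of the KL divergence. The loss L, the perturbation h, and this particular derivation route are illustrative assumptions; the paper itself derives the Fenchel dual in greater generality.

% Primal: the (generalized) Bayesian posterior as a regularized minimizer
% (Knoblauch et al., 2019). L is the negative log-likelihood for standard
% Bayes, or an arbitrary loss for Gibbs posteriors; \pi is the prior.
\[
  q^\ast \;=\; \operatorname*{arg\,min}_{q \in \mathcal{P}(\Theta)}
    \Big\{ \mathbb{E}_{\theta \sim q}\big[L(\theta)\big]
           + \mathrm{KL}(q \,\|\, \pi) \Big\},
  \qquad
  q^\ast(\theta) \;\propto\; \pi(\theta)\, e^{-L(\theta)}.
\]
% Dualizing the KL term with the Donsker--Varadhan representation,
%   KL(q || pi) = sup_h { E_q[h(theta)] - log E_pi[exp(h(theta))] },
% turns the primal into a min-max game:
\[
  \min_{q \in \mathcal{P}(\Theta)} \; \sup_{h} \;
    \Big\{ \mathbb{E}_{\theta \sim q}\big[L(\theta) + h(\theta)\big]
           \;-\; \log \mathbb{E}_{\theta \sim \pi}\big[e^{h(\theta)}\big] \Big\}.
\]
% The adversary perturbs the loss L by h but pays a cost
% log E_pi[exp(h)] determined entirely by the prior pi: this is the
% sense in which the prior becomes the adversary's cost function.

In this toy form the adversary's budget is a log-moment-generating function under the prior, so priors that concentrate mass make large perturbations expensive; the abstract's robustness claim corresponds to the minimizing player hedging against every such perturbation simultaneously.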