Rapid Model Comparison by Amortizing Across Models


Lily H. Zhang, Michael C. Hughes;
Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference, PMLR 118:1-11, 2020.

Abstract

Comparing the inferences of diverse candidate models is an essential part of model checking and escaping local optima. To enable efficient comparison, we introduce an amortized variational inference framework that can perform fast and reliable posterior estimation across models of the same architecture. Our Any Parameter Encoder (APE) extends the encoder neural network common in amortized inference to take both a data feature vector and a model parameter vector as input. APE thus reduces posterior inference across unseen data and models to a single forward pass. In experiments comparing candidate topic models for synthetic data and product reviews, our Any Parameter Encoder yields comparable posteriors to more expensive methods in far less time, especially when the encoder architecture is designed in a model-aware fashion.
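The core idea in the abstract can be illustrated with a minimal sketch: an encoder whose input concatenates a data feature vector with a (flattened) model parameter vector, so that one forward pass yields an approximate posterior for any model in the family. All sizes, weight names, and the MLP layout below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, H = 50, 5, 32          # vocab size, number of topics, hidden width (assumed)
x = rng.random(V)            # data feature vector (e.g. bag-of-words counts)
phi = rng.random(K * V)      # flattened model parameters (e.g. topic-word matrix)

# Encoder weights; in practice these would be trained over many (x, phi) pairs.
W1 = rng.standard_normal((H, V + K * V)) * 0.01
b1 = np.zeros(H)
W_mu = rng.standard_normal((K, H)) * 0.01
W_logvar = rng.standard_normal((K, H)) * 0.01

def ape_encode(x, phi):
    """One forward pass: data AND model parameters in, posterior parameters out."""
    h = np.tanh(W1 @ np.concatenate([x, phi]) + b1)
    return W_mu @ h, W_logvar @ h   # mean and log-variance of q(z | x, phi)

mu, logvar = ape_encode(x, phi)
print(mu.shape, logvar.shape)
```

Because the model parameters are an input rather than baked into the encoder's weights, swapping in a new candidate model requires no retraining, which is what makes rapid comparison across models possible.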
