Generalization and Memorization: The Bias Potential Model
Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, PMLR 145:1013-1043, 2022.
Abstract
Models for learning probability distributions, such as generative models and density estimators, behave quite differently from models for learning functions. One example is the memorization phenomenon, namely the ultimate convergence to the empirical distribution, that occurs in generative adversarial networks (GANs). For this reason, the issue of generalization is more subtle than for supervised learning. For the bias potential model, we show that dimension-independent generalization accuracy is achievable if early stopping is adopted, even though in the long term the model either memorizes the samples or diverges.
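To make the training-with-early-stopping setup concrete, the following is a minimal sketch (not taken from the paper) of a bias potential model in one dimension: the potential V is a random-feature model, trained by gradient descent on the cross-entropy loss L(V) = E_data[V] + log E_{p0}[exp(-V)], and training is stopped when a held-out validation loss stops improving. All distributions, feature counts, learning rates, and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D setup: base distribution p0 = N(0, 2^2),
# data drawn from N(1, 0.5^2); a held-out split is used for early stopping.
train_x = rng.normal(1.0, 0.5, size=200)
val_x = rng.normal(1.0, 0.5, size=200)

# Random-feature parameterization of the bias potential V(x) = a . phi(x).
m = 100
w = rng.normal(size=m)
b = rng.uniform(0, 2 * np.pi, size=m)
phi = lambda x: np.cos(np.outer(x, w) + b) / np.sqrt(m)  # (n, m) feature matrix
a = np.zeros(m)                                          # trainable weights

# Monte Carlo samples from p0 used to estimate the partition function.
base_phi = phi(rng.normal(0.0, 2.0, size=5000))

def loss(a, feats):
    """Cross-entropy loss L(V) = E_data[V] + log E_{p0}[exp(-V)]."""
    return (feats @ a).mean() + np.log(np.mean(np.exp(-(base_phi @ a))))

train_phi, val_phi = phi(train_x), phi(val_x)
lr, patience = 0.5, 50
best_val, best_a, bad = np.inf, a.copy(), 0
for step in range(5000):
    # Gradient of the loss: mean feature of the data minus the
    # exp(-V)-weighted mean feature of the base samples.
    wgt = np.exp(-(base_phi @ a))
    wgt /= wgt.sum()
    grad = train_phi.mean(axis=0) - wgt @ base_phi
    a -= lr * grad

    # Early stopping on the validation loss.
    v = loss(a, val_phi)
    if v < best_val - 1e-6:
        best_val, best_a, bad = v, a.copy(), 0
    else:
        bad += 1
        if bad >= patience:
            break
a = best_a  # keep the early-stopped potential rather than the final iterate
```

In this sketch, letting the loop run without the validation check drives the learned density toward the empirical distribution of the training samples (memorization), whereas stopping at the validation optimum keeps a smoother estimate; this is only meant to illustrate the role early stopping plays in the abstract's claim.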