Sample-based approximate regularization

Philip Bachman, Amir-Massoud Farahmand, Doina Precup;
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1926-1934, 2014.

Abstract

We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling as well as finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, compared to those they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.
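To make the idea concrete, here is a minimal sketch of the kind of estimator the abstract describes: for a linearly parameterized function f(x) = wᵀφ(x), a derivative-based penalty such as E_x[‖∇_x f(x)‖²] is quadratic in w, so it can be approximated by sampling points and replacing the feature derivatives with finite differences. The function name, signature, and step size below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def sar_penalty_matrix(feature_map, X, eps=1e-3):
    """Sample-based finite-difference approximation of the penalty matrix M
    such that w @ M @ w approximates E_x[||grad_x f(x)||^2] for f(x) = w @ phi(x).

    feature_map : callable mapping an (n, d) array to an (n, k) feature array
    X           : (n, d) array of sample points drawn from the data distribution
    eps         : finite-difference step size (an illustrative default)
    """
    n, d = X.shape
    k = feature_map(X[:1]).shape[1]
    M = np.zeros((k, k))
    for x in X:
        phi0 = feature_map(x[None, :])[0]
        for i in range(d):
            xp = x.copy()
            xp[i] += eps
            # Finite-difference approximation of the i-th partial derivative
            # of the feature map at x.
            g = (feature_map(xp[None, :])[0] - phi0) / eps
            M += np.outer(g, g)
    return M / n

# Example: with the identity feature map, f(x) = w @ x, so the gradient is w
# itself and the penalty reduces to an ordinary ridge penalty (M close to I).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
M = sar_penalty_matrix(lambda Z: Z, X)
```

The resulting matrix can then be used in a penalized least-squares fit, e.g. solving (ΦᵀΦ + λM)w = Φᵀy, so the approximate regularizer plugs in wherever an explicit quadratic penalty would.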
