Sample-based approximate regularization

Philip Bachman, Amir-Massoud Farahmand, Doina Precup
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1926-1934, 2014.

Abstract

We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling as well as finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, compared to those they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.
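The abstract's core idea can be illustrated with a minimal sketch (not the paper's implementation): for a linearly parameterized function f_w(x) = w·φ(x), a derivative-based penalty such as E[(∂f/∂v)²] can be approximated by finite differences of the feature map along random directions at sampled points, yielding a quadratic regularizer w·R·w that plugs into a ridge-style solve. The feature map, sampling scheme, and step size below are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(X):
    # illustrative quadratic feature map (an assumption, not from the paper)
    return np.hstack([X, X**2])

def sar_penalty_matrix(X, n_samples=200, eps=1e-2):
    """Build R so that w @ R @ w approximates the mean squared
    directional derivative of f_w(x) = w . phi(x) at sampled points."""
    idx = rng.integers(0, len(X), size=n_samples)
    xs = X[idx]
    # random unit directions for the finite-difference probes
    vs = rng.normal(size=xs.shape)
    vs /= np.linalg.norm(vs, axis=1, keepdims=True)
    # finite difference of the feature map along each direction
    D = (phi(xs + eps * vs) - phi(xs)) / eps
    return D.T @ D / n_samples

# fit ridge-style: minimize ||Phi w - y||^2 + lam * (w @ R @ w)
X = rng.normal(size=(100, 2))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=100)
Phi = phi(X)
R = sar_penalty_matrix(X)
lam = 1.0
w = np.linalg.solve(Phi.T @ Phi + lam * R, Phi.T @ y)
```

Because the penalty is quadratic in w, the regularized problem stays a linear system; the paper's theoretical results concern how fast such sampled approximations converge to the exact derivative penalty.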

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-bachman14,
  title = {Sample-based approximate regularization},
  author = {Philip Bachman and Amir-Massoud Farahmand and Doina Precup},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages = {1926--1934},
  year = {2014},
  editor = {Eric P. Xing and Tony Jebara},
  volume = {32},
  number = {2},
  series = {Proceedings of Machine Learning Research},
  address = {Beijing, China},
  month = {22--24 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v32/bachman14.pdf},
  url = {http://proceedings.mlr.press/v32/bachman14.html},
  abstract = {We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling as well as finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, compared to those they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.}
}
Endnote
%0 Conference Paper
%T Sample-based approximate regularization
%A Philip Bachman
%A Amir-Massoud Farahmand
%A Doina Precup
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-bachman14
%I PMLR
%J Proceedings of Machine Learning Research
%P 1926--1934
%U http://proceedings.mlr.press
%V 32
%N 2
%W PMLR
%X We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling as well as finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, compared to those they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.
RIS
TY  - CPAPER
TI  - Sample-based approximate regularization
AU  - Philip Bachman
AU  - Amir-Massoud Farahmand
AU  - Doina Precup
BT  - Proceedings of the 31st International Conference on Machine Learning
PY  - 2014/01/27
DA  - 2014/01/27
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-bachman14
PB  - PMLR
DP  - PMLR
SP  - 1926
EP  - 1934
L1  - http://proceedings.mlr.press/v32/bachman14.pdf
UR  - http://proceedings.mlr.press/v32/bachman14.html
AB  - We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling as well as finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, compared to those they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.
ER  -
APA
Bachman, P., Farahmand, A.-M., & Precup, D. (2014). Sample-based approximate regularization. Proceedings of the 31st International Conference on Machine Learning, in PMLR 32(2):1926-1934.
