Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior

Gaël Letarte, Emilie Morvant, Pascal Germain
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:768-776, 2019.

Abstract

We revisit the kernel random Fourier features (RFF) method of Rahimi and Recht (2007) through the lens of PAC-Bayesian theory. While the primary goal of RFF is to approximate a kernel, we look at the Fourier transform as a prior distribution over trigonometric hypotheses. This naturally suggests learning a posterior on these hypotheses. We derive generalization bounds that are optimized by learning a pseudo-posterior obtained from a closed-form expression. Based on this study, we consider two learning strategies: the first finds a compact landmarks-based representation of the data where each landmark is given by a distribution-tailored similarity measure, while the second provides a PAC-Bayesian justification for the kernel alignment method of Sinha and Duchi (2016).
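
To make the premise concrete, below is a minimal NumPy sketch of the idea (not the paper's algorithm): for an RBF kernel, the Fourier transform is a Gaussian over frequencies, so sampling frequencies from it recovers the RFF construction, and reweighting the sampled trigonometric hypotheses by an exponentiated empirical fit gives a closed-form, Gibbs-style pseudo-posterior. The RBF kernel, the alignment-based fit measure, and the parameters sigma, beta, and D are illustrative assumptions, not taken from the paper.

# A minimal sketch, assuming an RBF kernel and an alignment-based fit
# measure; sigma, beta, D and all names below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-labeled data.
n, d = 100, 5
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=n))

# Prior over trigonometric hypotheses: the Fourier transform of the RBF
# kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) is the Gaussian
# N(0, sigma^{-2} I) over frequencies; sampling from it is exactly the
# RFF construction of Rahimi and Recht (2007).
sigma, D = 1.0, 200
omegas = rng.normal(scale=1.0 / sigma, size=(D, d))

# Each frequency defines a hypothesis on pairs of points:
# h_omega(x, x') = cos(omega . (x - x')).
diffs = X[:, None, :] - X[None, :, :]   # (n, n, d) pairwise differences
H = np.cos(diffs @ omegas.T)            # (n, n, D) hypothesis values

# Empirical fit of each hypothesis: alignment with the label matrix
# y y^T (an assumed surrogate for the paper's empirical loss).
align = np.einsum('i,ijk,j->k', y, H, y) / n**2   # (D,)

# Closed-form Gibbs-style pseudo-posterior over the sampled frequencies:
# weights proportional to exp(beta * fit); the prior density cancels
# because the frequencies were drawn from the prior itself.
beta = 50.0
w = np.exp(beta * (align - align.max()))
w /= w.sum()

# Pseudo-posterior kernel (weighted mixture of cosine hypotheses)
# versus the uniform RFF average that approximates the original kernel.
K_pseudo = H @ w          # (n, n)
K_rff = H.mean(axis=2)    # (n, n)
print(K_pseudo[0, 1], K_rff[0, 1])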

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-letarte19a,
  title     = {Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior},
  author    = {Letarte, Ga\"{e}l and Morvant, Emilie and Germain, Pascal},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {768--776},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/letarte19a/letarte19a.pdf},
  url       = {https://proceedings.mlr.press/v89/letarte19a.html}
}
Endnote
%0 Conference Paper
%T Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior
%A Gaël Letarte
%A Emilie Morvant
%A Pascal Germain
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-letarte19a
%I PMLR
%P 768--776
%U https://proceedings.mlr.press/v89/letarte19a.html
%V 89
APA
Letarte, G., Morvant, E. & Germain, P. (2019). Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:768-776. Available from https://proceedings.mlr.press/v89/letarte19a.html.
