Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior
Proceedings of Machine Learning Research, PMLR 89:768-776, 2019.
Abstract
We revisit Rahimi and Recht (2007)’s kernel random Fourier features (RFF) method through the lens of the PAC-Bayesian theory. While the primary goal of RFF is to approximate a kernel, we look at the Fourier transform as a prior distribution over trigonometric hypotheses. It naturally suggests learning a posterior on these hypotheses. We derive generalization bounds that are optimized by learning a pseudo-posterior obtained from a closed-form expression. Based on this study, we consider two learning strategies: the first one finds a compact landmarks-based representation of the data where each landmark is given by a distribution-tailored similarity measure, while the second one provides a PAC-Bayesian justification to the kernel alignment method of Sinha and Duchi (2016).
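For context on the RFF construction the abstract builds on, here is a minimal NumPy sketch of Rahimi and Recht’s approximation of the RBF kernel, where frequencies are sampled from the kernel’s Fourier transform (the distribution the paper reinterprets as a prior). This is an illustrative sketch, not the authors’ code; the function name and parameters are our own.

```python
import numpy as np

def rff_features(X, n_features, sigma, rng):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).

    Each column of `omega` is a frequency sampled from the kernel's
    Fourier transform, here a Gaussian N(0, sigma^-2 I)."""
    d = X.shape[1]
    omega = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # phi(x)^T phi(y) is an unbiased Monte Carlo estimate of k(x, y)
    return np.sqrt(2.0 / n_features) * np.cos(X @ omega + b)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))
y = rng.normal(size=(1, 3))
sigma = 1.0

# Both points must share the same sampled frequencies omega and offsets b,
# so we map them jointly.
Phi = rff_features(np.vstack([x, y]), n_features=10000, sigma=sigma, rng=rng)
approx = float(Phi[0] @ Phi[1])
exact = float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))
```

With enough sampled frequencies, `approx` concentrates around `exact`; the paper’s departure is to reweight (learn a posterior over) these sampled trigonometric hypotheses rather than average them uniformly.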