Learning Invariant Representations with Kernel Warping
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:1003-1012, 2019.
Abstract
Invariance is an effective prior that has been extensively used to bias supervised learning with a \emph{given} representation of data. In order to learn invariant representations, wavelet- and scattering-based methods “hard code” invariance over the \emph{entire} sample space, and are hence restricted to a limited range of transformations. Kernels based on Haar integration likewise work only on a \emph{group} of transformations. In this work, we break this limitation by designing a new representation learning algorithm that incorporates invariances \emph{beyond transformation}. Our approach, which is based on warping the kernel in a data-dependent fashion, is computationally efficient using random features, and leads to a deep kernel through multiple layers. We apply it to convolutional kernel networks and demonstrate its stability.
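The abstract attributes the method's computational efficiency to random features. As a point of reference only, and not the paper's data-dependent kernel warping, the sketch below shows the standard random Fourier feature construction (Rahimi & Recht) approximating an RBF kernel; the function name, feature count, and bandwidth parameter gamma are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, n_features=256, gamma=1.0, seed=None):
    """Map X to random features z(x) so that z(x).z(y) ~ exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the spectral density of the RBF kernel (Gaussian, variance 2*gamma)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    # Random phases remove the need for paired sin/cos features
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Usage: the inner product of random features approximates the exact RBF kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=5000, gamma=0.5, seed=0)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print(np.max(np.abs(K_approx - K_exact)))  # small for large n_features
```

The point of such approximations is that training then scales linearly in the number of samples, since the kernel never has to be materialized as an n-by-n matrix; how the warped, data-dependent kernel of the paper is approximated is specified in the paper itself, not here.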