Sketch In, Sketch Out: Accelerating both Learning and Inference for Structured Prediction with Kernels

Tamim El Ahmad, Luc Brogat-Motte, Pierre Laforgue, Florence d’Alché-Buc
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:109-117, 2024.

Abstract

Leveraging the kernel trick in both the input and output spaces, surrogate kernel methods are a flexible and theoretically grounded solution to structured output prediction. While they provide state-of-the-art performance on complex data sets of moderate size (e.g., in chemoinformatics), these approaches fail to scale. We propose to equip surrogate kernel methods with sketching-based approximations, applied to both the input and output feature maps. We prove excess risk bounds on the original structured prediction problem, showing how to attain close-to-optimal rates with a reduced sketch size that depends on the eigendecay of the input/output covariance operators. From a computational perspective, we show that the two approximations have distinct but complementary impacts: sketching the input kernel mostly reduces training time, while sketching the output kernel decreases the inference time. Empirically, our approach is shown to scale, achieving state-of-the-art performance on benchmark data sets where non-sketched methods are intractable.
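The idea of sketching both feature maps can be illustrated with a toy sub-sampling (Nyström) sketch. The snippet below is a schematic illustration only, not the paper's implementation: the Gaussian kernels, the anchor counts `m` and `p`, the vector-valued toy outputs, and the ridge solver are all illustrative assumptions. Input anchors shrink the linear system solved at training time; output anchors shrink the embedding compared against candidates at decoding time.

```python
import numpy as np

def gauss_kernel(A, B, gamma=1.0):
    # Gaussian kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, anchors, gamma=1.0, eps=1e-10):
    # Nystrom feature map: phi(x) = K_mm^{-1/2} [k(x, a_1), ..., k(x, a_m)]
    Kmm = gauss_kernel(anchors, anchors, gamma)
    w, V = np.linalg.eigh(Kmm)
    inv_sqrt = V @ np.diag(np.maximum(w, eps) ** -0.5) @ V.T
    return gauss_kernel(X, anchors, gamma) @ inv_sqrt

rng = np.random.default_rng(0)
n, m, p = 200, 20, 10                      # n samples, m input / p output anchors
X = rng.normal(size=(n, 3))
Y = np.tanh(X @ rng.normal(size=(3, 2)))   # toy stand-in for structured outputs

# "Sketch in": m-dimensional input features; "sketch out": p-dim. output features
Phi = nystrom_features(X, X[:m], gamma=0.5)   # n x m
Psi = nystrom_features(Y, Y[:p], gamma=0.5)   # n x p

# Training: ridge regression from sketched input to sketched output features.
# The system is m x m instead of n x n, hence the training-time saving.
lam = 1e-3
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ Psi)  # m x p

# Inference (decoding): among candidate outputs, return the one whose
# p-dimensional sketched embedding is closest to the prediction h(x).
candidates = Y                                # reuse training outputs as candidates
cand_emb = nystrom_features(candidates, Y[:p], gamma=0.5)   # n_cand x p
H = Phi @ W                                   # predictions in sketched output space
pred_idx = np.argmin(((cand_emb[None] - H[:, None]) ** 2).sum(-1), axis=1)
train_acc = (pred_idx == np.arange(n)).mean()  # sanity check, not a benchmark
```

In this toy setting the decoding loop scores each candidate with a `p`-dimensional inner product rather than a full kernel expansion over all `n` training outputs, which is the mechanism behind the reduced inference time.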

Cite this Paper

BibTeX
@InProceedings{pmlr-v238-el-ahmad24a,
  title     = {Sketch In, Sketch Out: Accelerating both Learning and Inference for Structured Prediction with Kernels},
  author    = {El Ahmad, Tamim and Brogat-Motte, Luc and Laforgue, Pierre and d'Alch\'{e}-Buc, Florence},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {109--117},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/el-ahmad24a/el-ahmad24a.pdf},
  url       = {https://proceedings.mlr.press/v238/el-ahmad24a.html},
  abstract  = {Leveraging the kernel trick in both the input and output spaces, surrogate kernel methods are a flexible and theoretically grounded solution to structured output prediction. If they provide state-of-the-art performance on complex data sets of moderate size (e.g., in chemoinformatics), these approaches however fail to scale. We propose to equip surrogate kernel methods with sketching-based approximations, applied to both the input and output feature maps. We prove excess risk bounds on the original structured prediction problem, showing how to attain close-to-optimal rates with a reduced sketch size that depends on the eigendecay of the input/output covariance operators. From a computational perspective, we show that the two approximations have distinct but complementary impacts: sketching the input kernel mostly reduces training time, while sketching the output kernel decreases the inference time. Empirically, our approach is shown to scale, achieving state-of-the-art performance on benchmark data sets where non-sketched methods are intractable.}
}
Endnote
%0 Conference Paper
%T Sketch In, Sketch Out: Accelerating both Learning and Inference for Structured Prediction with Kernels
%A Tamim El Ahmad
%A Luc Brogat-Motte
%A Pierre Laforgue
%A Florence d’Alché-Buc
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-el-ahmad24a
%I PMLR
%P 109--117
%U https://proceedings.mlr.press/v238/el-ahmad24a.html
%V 238
%X Leveraging the kernel trick in both the input and output spaces, surrogate kernel methods are a flexible and theoretically grounded solution to structured output prediction. If they provide state-of-the-art performance on complex data sets of moderate size (e.g., in chemoinformatics), these approaches however fail to scale. We propose to equip surrogate kernel methods with sketching-based approximations, applied to both the input and output feature maps. We prove excess risk bounds on the original structured prediction problem, showing how to attain close-to-optimal rates with a reduced sketch size that depends on the eigendecay of the input/output covariance operators. From a computational perspective, we show that the two approximations have distinct but complementary impacts: sketching the input kernel mostly reduces training time, while sketching the output kernel decreases the inference time. Empirically, our approach is shown to scale, achieving state-of-the-art performance on benchmark data sets where non-sketched methods are intractable.
APA
El Ahmad, T., Brogat-Motte, L., Laforgue, P. & d’Alché-Buc, F. (2024). Sketch In, Sketch Out: Accelerating both Learning and Inference for Structured Prediction with Kernels. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:109-117. Available from https://proceedings.mlr.press/v238/el-ahmad24a.html.
