Learning with Exact Invariances in Polynomial Time

Ashkan Soleymani, Behrooz Tahmasebi, Stefanie Jegelka, Patrick Jaillet
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:56101-56121, 2025.

Abstract

We study the statistical-computational trade-offs for learning with exact invariances (or symmetries) using kernel regression. Traditional methods, such as data augmentation, group averaging, canonicalization, and frame-averaging, either fail to provide a polynomial-time solution or are not applicable in the kernel setting. However, with oracle access to the geometric properties of the input space, we propose a polynomial-time algorithm that learns a classifier with exact invariances. Moreover, our approach achieves the same excess population risk (or generalization error) as the original kernel regression problem. To the best of our knowledge, this is the first polynomial-time algorithm to achieve exact (as opposed to approximate) invariances in this setting. In developing our approach, we also resolve a question recently posed by Díaz et al. (2025) on efficient computation of invariant bases and kernels with respect to finite groups, even when the group size is prohibitively large. Our proof leverages tools from differential geometry, spectral theory, and optimization. A key result in our development is a new reformulation of the problem of learning under invariances as optimizing an infinite number of linearly constrained convex quadratic programs, which may be of independent interest.
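
For context, the group-averaging baseline mentioned in the abstract symmetrizes a base kernel over a finite group G, via K_G(x, y) = (1/|G|) Σ_{g ∈ G} K(x, g·y). This yields a predictor that is exactly invariant, but each kernel evaluation costs O(|G|) base-kernel calls, which is what makes the approach non-polynomial for large groups. Below is a minimal Python sketch of that baseline, not the paper's algorithm; the RBF base kernel and the cyclic group of coordinate shifts are illustrative assumptions, not the paper's construction.

import numpy as np

def rbf(x, y, gamma=1.0):
    # Base (non-invariant) RBF kernel.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def averaged_kernel(x, y, gamma=1.0):
    # Kernel symmetrized over the cyclic group of coordinate shifts.
    # Runtime is O(|G|) per evaluation; for groups of super-polynomial
    # size this averaging step is exactly the bottleneck.
    d = len(y)
    return np.mean([rbf(x, np.roll(y, g), gamma) for g in range(d)])

def fit_krr(X, y, lam=1e-2, gamma=1.0):
    # Kernel ridge regression with the invariant (averaged) kernel.
    n = len(X)
    K = np.array([[averaged_kernel(X[i], X[j], gamma) for j in range(n)]
                  for i in range(n)])
    alpha = np.linalg.solve(K + lam * np.eye(n), y)
    return lambda x: sum(a * averaged_kernel(x, xi, gamma)
                         for a, xi in zip(alpha, X))

The fitted predictor satisfies f(x) = f(g·x) exactly for every shift g, up to floating-point error, but filling the n-by-n kernel matrix requires |G| base-kernel evaluations per entry. Achieving the same exact invariance without paying this |G| factor is the regime the paper addresses.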

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-soleymani25a,
  title     = {Learning with Exact Invariances in Polynomial Time},
  author    = {Soleymani, Ashkan and Tahmasebi, Behrooz and Jegelka, Stefanie and Jaillet, Patrick},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {56101--56121},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/soleymani25a/soleymani25a.pdf},
  url       = {https://proceedings.mlr.press/v267/soleymani25a.html},
  abstract  = {We study the statistical-computational trade-offs for learning with exact invariances (or symmetries) using kernel regression. Traditional methods, such as data augmentation, group averaging, canonicalization, and frame-averaging, either fail to provide a polynomial-time solution or are not applicable in the kernel setting. However, with oracle access to the geometric properties of the input space, we propose a polynomial-time algorithm that learns a classifier with exact invariances. Moreover, our approach achieves the same excess population risk (or generalization error) as the original kernel regression problem. To the best of our knowledge, this is the first polynomial-time algorithm to achieve exact (as opposed to approximate) invariances in this setting. In developing our approach, we also resolve a question recently posed by D{\'\i}az et al. (2025) on efficient computation of invariant bases and kernels with respect to finite groups, even when the group size is prohibitively large. Our proof leverages tools from differential geometry, spectral theory, and optimization. A key result in our development is a new reformulation of the problem of learning under invariances as optimizing an infinite number of linearly constrained convex quadratic programs, which may be of independent interest.}
}
Endnote
%0 Conference Paper
%T Learning with Exact Invariances in Polynomial Time
%A Ashkan Soleymani
%A Behrooz Tahmasebi
%A Stefanie Jegelka
%A Patrick Jaillet
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-soleymani25a
%I PMLR
%P 56101--56121
%U https://proceedings.mlr.press/v267/soleymani25a.html
%V 267
%X We study the statistical-computational trade-offs for learning with exact invariances (or symmetries) using kernel regression. Traditional methods, such as data augmentation, group averaging, canonicalization, and frame-averaging, either fail to provide a polynomial-time solution or are not applicable in the kernel setting. However, with oracle access to the geometric properties of the input space, we propose a polynomial-time algorithm that learns a classifier with exact invariances. Moreover, our approach achieves the same excess population risk (or generalization error) as the original kernel regression problem. To the best of our knowledge, this is the first polynomial-time algorithm to achieve exact (as opposed to approximate) invariances in this setting. In developing our approach, we also resolve a question recently posed by Díaz et al. (2025) on efficient computation of invariant bases and kernels with respect to finite groups, even when the group size is prohibitively large. Our proof leverages tools from differential geometry, spectral theory, and optimization. A key result in our development is a new reformulation of the problem of learning under invariances as optimizing an infinite number of linearly constrained convex quadratic programs, which may be of independent interest.
APA
Soleymani, A., Tahmasebi, B., Jegelka, S. & Jaillet, P. (2025). Learning with Exact Invariances in Polynomial Time. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:56101-56121. Available from https://proceedings.mlr.press/v267/soleymani25a.html.
