RedEx: Beyond Fixed Representation Methods via Convex Optimization

Amit Daniely, Mariano Schain, Gilad Yehudai
Proceedings of The 35th International Conference on Algorithmic Learning Theory, PMLR 237:518-543, 2024.

Abstract

Optimizing neural networks is a difficult task that is still not well understood. On the other hand, fixed representation methods such as kernels and random features have provable optimization guarantees but inferior performance, due to their inherent inability to learn the representations. In this paper, we aim to bridge this gap by presenting a novel architecture called RedEx (Reduced Expander Extractor) that is as expressive as neural networks and can also be trained in a layer-wise fashion via a convex program with semi-definite constraints and optimization guarantees. We also show that RedEx provably surpasses fixed representation methods, in the sense that it can efficiently learn a family of target functions that fixed representation methods cannot.
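To make the phrase "a convex program with semi-definite constraints" concrete, the sketch below shows one generic layer-wise step of that kind in Python with cvxpy. It is an illustrative assumption only, not the paper's actual RedEx objective: a single PSD matrix A is fit to targets through quadratic features x^T A x (which are affine in A, so the problem is convex), and its eigendecomposition then yields features for a next layer. All names, the loss, and the trace regularizer are placeholders.

# Illustrative sketch only: a layer-wise convex step with a semi-definite
# constraint, NOT the RedEx formulation from the paper.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 8
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)            # placeholder targets (e.g. residuals from earlier layers)

A = cp.Variable((d, d), PSD=True)      # semi-definite constraint: A must be positive semi-definite
# Quadratic features x_i^T A x_i are affine in A, so the objective below is convex.
preds = cp.hstack([cp.quad_form(X[i], A) for i in range(n)])
objective = cp.Minimize(cp.sum_squares(preds - y) / n + 0.1 * cp.trace(A))
cp.Problem(objective).solve()          # a convex SDP, so solvers reach a global optimum

# One way to "extract" a representation for the next layer: use the
# eigendecomposition of the learned PSD matrix as a linear feature map.
eigvals, eigvecs = np.linalg.eigh(A.value)
next_features = X @ eigvecs * np.sqrt(np.clip(eigvals, 0, None))

The design point this illustrates is the one claimed in the abstract: each layer is obtained by solving a convex problem with optimization guarantees, rather than by non-convex gradient descent over the whole network.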

Cite this Paper


BibTeX
@InProceedings{pmlr-v237-daniely24b,
  title     = {RedEx: Beyond Fixed Representation Methods via Convex Optimization},
  author    = {Daniely, Amit and Schain, Mariano and Yehudai, Gilad},
  booktitle = {Proceedings of The 35th International Conference on Algorithmic Learning Theory},
  pages     = {518--543},
  year      = {2024},
  editor    = {Vernade, Claire and Hsu, Daniel},
  volume    = {237},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v237/daniely24b/daniely24b.pdf},
  url       = {https://proceedings.mlr.press/v237/daniely24b.html},
  abstract  = {Optimizing Neural networks is a difficult task which is still not well understood. On the other hand, fixed representation methods such as kernels and random features have provable optimization guarantees but inferior performance due to their inherent inability to learn the representations. In this paper, we aim at bridging this gap by presenting a novel architecture called RedEx (Reduced Expander Extractor) that is as expressive as neural networks and can also be trained in a layer-wise fashion via a convex program with semi-definite constraints and optimization guarantees. We also show that RedEx provably surpasses fixed representation methods, in the sense that it can efficiently learn a family of target functions which fixed representation methods cannot.}
}
APA
Daniely, A., Schain, M. & Yehudai, G. (2024). RedEx: Beyond Fixed Representation Methods via Convex Optimization. Proceedings of The 35th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 237:518-543. Available from https://proceedings.mlr.press/v237/daniely24b.html.
