Inductive Two-Layer Modeling with Parametric Bregman Transfer

Vignesh Ganapathiraman, Zhan Shi, Xinhua Zhang, Yaoliang Yu
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1636-1645, 2018.

Abstract

Latent prediction models, exemplified by multi-layer networks, employ hidden variables that automate abstract feature discovery. They typically pose nonconvex optimization problems, and effective semi-definite programming (SDP) relaxations have been developed to enable global solutions (Aslan et al., 2014). However, these models rely on nonparametric training of layer-wise kernel representations, and are therefore restricted to transductive learning, which slows down test prediction. In this paper, we develop a new inductive learning framework for parametric transfer functions using matching losses. The result for ReLU utilizes completely positive matrices, and the inductive learner not only delivers superior accuracy but also offers an order of magnitude speedup over SDP with constant approximation guarantees.
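For readers unfamiliar with the term, the "matching loss" construction referenced in the abstract is a standard device (Auer et al., 1996): the transfer function is written as the gradient of a convex potential, and the loss is the induced Bregman divergence on pre-activations, which makes it convex in the prediction-side pre-activation. The sketch below states this generic definition together with the usual ReLU potential; the notation (F, D_F, a, \hat{a}) is illustrative and not quoted from the paper itself.

% Matching loss induced by a transfer f = \nabla F with convex potential F:
% it equals the Bregman divergence of F between pre-activations, hence it is
% convex in \hat{a} even though f itself is nonlinear.
\[
  M_f(\hat{y}, y) \;=\; D_F(\hat{a}, a)
  \;=\; F(\hat{a}) - F(a) - \langle \nabla F(a),\, \hat{a} - a \rangle,
  \qquad y = f(a),\quad \hat{y} = f(\hat{a}).
\]
% Illustrative ReLU instance (a standard potential, assumed here for
% concreteness): the elementwise squared hinge yields the ReLU transfer.
% Note that ReLU is not invertible, so the classical construction needs
% extra care, which is what motivates a parametric treatment.
\[
  F(z) = \tfrac{1}{2}\,\bigl\|\max(0, z)\bigr\|_2^2
  \quad\Longrightarrow\quad
  \nabla F(z) = \max(0, z).
\]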

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-ganapathiraman18a,
  title     = {Inductive Two-Layer Modeling with Parametric {B}regman Transfer},
  author    = {Ganapathiraman, Vignesh and Shi, Zhan and Zhang, Xinhua and Yu, Yaoliang},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1636--1645},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/ganapathiraman18a/ganapathiraman18a.pdf},
  url       = {https://proceedings.mlr.press/v80/ganapathiraman18a.html},
  abstract  = {Latent prediction models, exemplified by multi-layer networks, employ hidden variables that automate abstract feature discovery. They typically pose nonconvex optimization problems, and effective semi-definite programming (SDP) relaxations have been developed to enable global solutions (Aslan et al., 2014). However, these models rely on nonparametric training of layer-wise kernel representations, and are therefore restricted to transductive learning, which slows down test prediction. In this paper, we develop a new inductive learning framework for parametric transfer functions using matching losses. The result for ReLU utilizes completely positive matrices, and the inductive learner not only delivers superior accuracy but also offers an order of magnitude speedup over SDP with constant approximation guarantees.}
}
Endnote
%0 Conference Paper
%T Inductive Two-Layer Modeling with Parametric Bregman Transfer
%A Vignesh Ganapathiraman
%A Zhan Shi
%A Xinhua Zhang
%A Yaoliang Yu
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-ganapathiraman18a
%I PMLR
%P 1636--1645
%U https://proceedings.mlr.press/v80/ganapathiraman18a.html
%V 80
%X Latent prediction models, exemplified by multi-layer networks, employ hidden variables that automate abstract feature discovery. They typically pose nonconvex optimization problems, and effective semi-definite programming (SDP) relaxations have been developed to enable global solutions (Aslan et al., 2014). However, these models rely on nonparametric training of layer-wise kernel representations, and are therefore restricted to transductive learning, which slows down test prediction. In this paper, we develop a new inductive learning framework for parametric transfer functions using matching losses. The result for ReLU utilizes completely positive matrices, and the inductive learner not only delivers superior accuracy but also offers an order of magnitude speedup over SDP with constant approximation guarantees.
APA
Ganapathiraman, V., Shi, Z., Zhang, X. & Yu, Y. (2018). Inductive Two-Layer Modeling with Parametric Bregman Transfer. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1636-1645. Available from https://proceedings.mlr.press/v80/ganapathiraman18a.html.
