Improper Deep Kernels

Uri Heinemann, Roi Livni, Elad Eban, Gal Elidan, Amir Globerson
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:1159-1167, 2016.

Abstract

Neural networks have recently re-emerged as a powerful hypothesis class, yielding impressive classification accuracy in multiple domains. However, their training is a non-convex optimization problem. Here we address this difficulty by turning to "improper learning" of neural nets. In other words, we learn a classifier that is not a neural net but is competitive with the best neural net model given a sufficient number of training examples. Our approach relies on a novel kernel which integrates over the set of possible neural models. It turns out that the corresponding integral can be evaluated in closed form via a simple recursion. The learning problem is then an SVM with this kernel, and a global optimum can thus be found efficiently. We also provide sample complexity results which depend on the stability of the optimal neural net.

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-heinemann16,
  title     = {Improper Deep Kernels},
  author    = {Uri Heinemann and Roi Livni and Elad Eban and Gal Elidan and Amir Globerson},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {1159--1167},
  year      = {2016},
  editor    = {Arthur Gretton and Christian C. Robert},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/heinemann16.pdf},
  url       = {http://proceedings.mlr.press/v51/heinemann16.html},
  abstract  = {Neural networks have recently re-emerged as a powerful hypothesis class, yielding impressive classification accuracy in multiple domains. However, their training is a non-convex optimization problem. Here we address this difficulty by turning to "improper learning" of neural nets. In other words, we learn a classifier that is not a neural net but is competitive with the best neural net model given a sufficient number of training examples. Our approach relies on a novel kernel which integrates over the set of possible neural models. It turns out that the corresponding integral can be evaluated in closed form via a simple recursion. The learning problem is then an SVM with this kernel, and a global optimum can thus be found efficiently. We also provide sample complexity results which depend on the stability of the optimal neural net.}
}
Endnote
%0 Conference Paper
%T Improper Deep Kernels
%A Uri Heinemann
%A Roi Livni
%A Elad Eban
%A Gal Elidan
%A Amir Globerson
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-heinemann16
%I PMLR
%J Proceedings of Machine Learning Research
%P 1159--1167
%U http://proceedings.mlr.press
%V 51
%W PMLR
%X Neural networks have recently re-emerged as a powerful hypothesis class, yielding impressive classification accuracy in multiple domains. However, their training is a non-convex optimization problem. Here we address this difficulty by turning to "improper learning" of neural nets. In other words, we learn a classifier that is not a neural net but is competitive with the best neural net model given a sufficient number of training examples. Our approach relies on a novel kernel which integrates over the set of possible neural models. It turns out that the corresponding integral can be evaluated in closed form via a simple recursion. The learning problem is then an SVM with this kernel, and a global optimum can thus be found efficiently. We also provide sample complexity results which depend on the stability of the optimal neural net.
RIS
TY - CPAPER
TI - Improper Deep Kernels
AU - Uri Heinemann
AU - Roi Livni
AU - Elad Eban
AU - Gal Elidan
AU - Amir Globerson
BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
PY - 2016/05/02
DA - 2016/05/02
ED - Arthur Gretton
ED - Christian C. Robert
ID - pmlr-v51-heinemann16
PB - PMLR
SP - 1159
DP - PMLR
EP - 1167
L1 - http://proceedings.mlr.press/v51/heinemann16.pdf
UR - http://proceedings.mlr.press/v51/heinemann16.html
AB - Neural networks have recently re-emerged as a powerful hypothesis class, yielding impressive classification accuracy in multiple domains. However, their training is a non-convex optimization problem. Here we address this difficulty by turning to "improper learning" of neural nets. In other words, we learn a classifier that is not a neural net but is competitive with the best neural net model given a sufficient number of training examples. Our approach relies on a novel kernel which integrates over the set of possible neural models. It turns out that the corresponding integral can be evaluated in closed form via a simple recursion. The learning problem is then an SVM with this kernel, and a global optimum can thus be found efficiently. We also provide sample complexity results which depend on the stability of the optimal neural net.
ER -
APA
Heinemann, U., Livni, R., Eban, E., Elidan, G. & Globerson, A. (2016). Improper Deep Kernels. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in PMLR 51:1159-1167.