PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model

Taiji Suzuki
Proceedings of the 25th Annual Conference on Learning Theory, PMLR 23:8.1-8.20, 2012.

Abstract

We develop a PAC-Bayesian bound on the convergence rate of a Bayesian variant of Multiple Kernel Learning (MKL), an estimation method for the sparse additive model. Standard analyses of MKL require a strong condition on the design analogous to the restricted eigenvalue condition used in the analysis of the Lasso and the Dantzig selector. In this paper, we apply the PAC-Bayesian technique to show that the Bayesian variant of MKL achieves the optimal convergence rate without such strong conditions on the design. Our approach combines PAC-Bayesian analysis with recently developed theories of non-parametric Gaussian process regression. The bound is derived in a fixed-design setting. Our analysis includes the existing result for Gaussian process regression as a special case, and the proof is much simpler by virtue of the PAC-Bayesian technique. We also give the convergence rate of the Bayesian variant of the Group Lasso as a finite-dimensional special case.

Cite this Paper


BibTeX
@InProceedings{pmlr-v23-suzuki12,
  title     = {PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model},
  author    = {Suzuki, Taiji},
  booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
  pages     = {8.1--8.20},
  year      = {2012},
  editor    = {Mannor, Shie and Srebro, Nathan and Williamson, Robert C.},
  volume    = {23},
  series    = {Proceedings of Machine Learning Research},
  address   = {Edinburgh, Scotland},
  month     = {25--27 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v23/suzuki12/suzuki12.pdf},
  url       = {https://proceedings.mlr.press/v23/suzuki12.html},
  abstract  = {We develop a PAC-Bayesian bound on the convergence rate of a Bayesian variant of Multiple Kernel Learning (MKL), an estimation method for the sparse additive model. Standard analyses of MKL require a strong condition on the design analogous to the restricted eigenvalue condition used in the analysis of the Lasso and the Dantzig selector. In this paper, we apply the PAC-Bayesian technique to show that the Bayesian variant of MKL achieves the optimal convergence rate without such strong conditions on the design. Our approach combines PAC-Bayesian analysis with recently developed theories of non-parametric Gaussian process regression. The bound is derived in a fixed-design setting. Our analysis includes the existing result for Gaussian process regression as a special case, and the proof is much simpler by virtue of the PAC-Bayesian technique. We also give the convergence rate of the Bayesian variant of the Group Lasso as a finite-dimensional special case.}
}
Endnote
%0 Conference Paper
%T PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model
%A Taiji Suzuki
%B Proceedings of the 25th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2012
%E Shie Mannor
%E Nathan Srebro
%E Robert C. Williamson
%F pmlr-v23-suzuki12
%I PMLR
%P 8.1--8.20
%U https://proceedings.mlr.press/v23/suzuki12.html
%V 23
%X We develop a PAC-Bayesian bound on the convergence rate of a Bayesian variant of Multiple Kernel Learning (MKL), an estimation method for the sparse additive model. Standard analyses of MKL require a strong condition on the design analogous to the restricted eigenvalue condition used in the analysis of the Lasso and the Dantzig selector. In this paper, we apply the PAC-Bayesian technique to show that the Bayesian variant of MKL achieves the optimal convergence rate without such strong conditions on the design. Our approach combines PAC-Bayesian analysis with recently developed theories of non-parametric Gaussian process regression. The bound is derived in a fixed-design setting. Our analysis includes the existing result for Gaussian process regression as a special case, and the proof is much simpler by virtue of the PAC-Bayesian technique. We also give the convergence rate of the Bayesian variant of the Group Lasso as a finite-dimensional special case.
RIS
TY  - CPAPER
TI  - PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model
AU  - Taiji Suzuki
BT  - Proceedings of the 25th Annual Conference on Learning Theory
DA  - 2012/06/16
ED  - Shie Mannor
ED  - Nathan Srebro
ED  - Robert C. Williamson
ID  - pmlr-v23-suzuki12
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 23
SP  - 8.1
EP  - 8.20
L1  - http://proceedings.mlr.press/v23/suzuki12/suzuki12.pdf
UR  - https://proceedings.mlr.press/v23/suzuki12.html
AB  - We develop a PAC-Bayesian bound on the convergence rate of a Bayesian variant of Multiple Kernel Learning (MKL), an estimation method for the sparse additive model. Standard analyses of MKL require a strong condition on the design analogous to the restricted eigenvalue condition used in the analysis of the Lasso and the Dantzig selector. In this paper, we apply the PAC-Bayesian technique to show that the Bayesian variant of MKL achieves the optimal convergence rate without such strong conditions on the design. Our approach combines PAC-Bayesian analysis with recently developed theories of non-parametric Gaussian process regression. The bound is derived in a fixed-design setting. Our analysis includes the existing result for Gaussian process regression as a special case, and the proof is much simpler by virtue of the PAC-Bayesian technique. We also give the convergence rate of the Bayesian variant of the Group Lasso as a finite-dimensional special case.
ER  -
APA
Suzuki, T. (2012). PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model. Proceedings of the 25th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 23:8.1-8.20. Available from https://proceedings.mlr.press/v23/suzuki12.html.