Scaling the Indian Buffet Process via Submodular Maximization

Colorado Reed, Zoubin Ghahramani
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1013-1021, 2013.

Abstract

Inference for latent feature models is inherently difficult as the inference space grows exponentially with the size of the input data and number of latent features. In this work, we use the maximization-expectation framework of Kurihara & Welling (2008) to perform approximate MAP inference for linear-Gaussian latent feature models with an Indian Buffet Process (IBP) prior. This formulation yields a submodular function of the features that corresponds to a lower bound on the model evidence. By adding a constant to this function, we obtain a nonnegative submodular function that can be maximized via a greedy algorithm that obtains at least a 1/3-approximation to the optimal solution. Our inference method scales linearly with the size of the input data, and we show the efficacy of our method on the largest datasets currently analyzed using an IBP model.
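The 1/3 guarantee the abstract cites is what the deterministic double-greedy algorithm for unconstrained nonnegative submodular maximization provides (Buchbinder et al., 2012); the abstract does not spell out which greedy variant the paper uses, so the sketch below is illustrative rather than a transcription of the paper's method. In this minimal Python sketch, the set function f and ground set V are hypothetical stand-ins (a toy coverage function), not the paper's evidence lower bound.

import itertools

def double_greedy(f, V):
    """Deterministic double greedy: returns S with f(S) >= OPT / 3
    for any nonnegative submodular f on ground set V."""
    X, Y = set(), set(V)            # X grows from empty; Y shrinks from V
    for k in V:
        a = f(X | {k}) - f(X)       # marginal gain of adding k to X
        b = f(Y - {k}) - f(Y)       # marginal gain of removing k from Y
        if a >= b:                  # keep whichever move is more profitable
            X.add(k)
        else:
            Y.remove(k)
    return X                        # X == Y when the loop finishes

if __name__ == "__main__":
    # Toy coverage function (nonnegative, submodular): f(S) counts the
    # items covered by the chosen sets. Purely illustrative.
    sets = {0: {1, 2}, 1: {2, 3}, 2: {4}}
    f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
    print(sorted(double_greedy(f, list(sets))))   # -> [0, 1, 2]

Each element of V is examined exactly once, so the routine makes O(|V|) evaluations of f; this single-pass structure is what makes greedy schemes of this kind attractive at the data scales the abstract targets.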

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-reed13,
  title =     {Scaling the Indian Buffet Process via Submodular Maximization},
  author =    {Reed, Colorado and Ghahramani, Zoubin},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages =     {1013--1021},
  year =      {2013},
  editor =    {Dasgupta, Sanjoy and McAllester, David},
  volume =    {28},
  number =    {3},
  series =    {Proceedings of Machine Learning Research},
  address =   {Atlanta, Georgia, USA},
  month =     {17--19 Jun},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v28/reed13.pdf},
  url =       {https://proceedings.mlr.press/v28/reed13.html},
  abstract =  {Inference for latent feature models is inherently difficult as the inference space grows exponentially with the size of the input data and number of latent features. In this work, we use the maximization-expectation framework of Kurihara & Welling (2008) to perform approximate MAP inference for linear-Gaussian latent feature models with an Indian Buffet Process (IBP) prior. This formulation yields a submodular function of the features that corresponds to a lower bound on the model evidence. By adding a constant to this function, we obtain a nonnegative submodular function that can be maximized via a greedy algorithm that obtains at least a 1/3-approximation to the optimal solution. Our inference method scales linearly with the size of the input data, and we show the efficacy of our method on the largest datasets currently analyzed using an IBP model.}
}
Endnote
%0 Conference Paper
%T Scaling the Indian Buffet Process via Submodular Maximization
%A Colorado Reed
%A Zoubin Ghahramani
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-reed13
%I PMLR
%P 1013--1021
%U https://proceedings.mlr.press/v28/reed13.html
%V 28
%N 3
%X Inference for latent feature models is inherently difficult as the inference space grows exponentially with the size of the input data and number of latent features. In this work, we use the maximization-expectation framework of Kurihara & Welling (2008) to perform approximate MAP inference for linear-Gaussian latent feature models with an Indian Buffet Process (IBP) prior. This formulation yields a submodular function of the features that corresponds to a lower bound on the model evidence. By adding a constant to this function, we obtain a nonnegative submodular function that can be maximized via a greedy algorithm that obtains at least a 1/3-approximation to the optimal solution. Our inference method scales linearly with the size of the input data, and we show the efficacy of our method on the largest datasets currently analyzed using an IBP model.
RIS
TY - CPAPER
TI - Scaling the Indian Buffet Process via Submodular Maximization
AU - Colorado Reed
AU - Zoubin Ghahramani
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/26
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-reed13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 3
SP - 1013
EP - 1021
L1 - http://proceedings.mlr.press/v28/reed13.pdf
UR - https://proceedings.mlr.press/v28/reed13.html
AB - Inference for latent feature models is inherently difficult as the inference space grows exponentially with the size of the input data and number of latent features. In this work, we use the maximization-expectation framework of Kurihara & Welling (2008) to perform approximate MAP inference for linear-Gaussian latent feature models with an Indian Buffet Process (IBP) prior. This formulation yields a submodular function of the features that corresponds to a lower bound on the model evidence. By adding a constant to this function, we obtain a nonnegative submodular function that can be maximized via a greedy algorithm that obtains at least a 1/3-approximation to the optimal solution. Our inference method scales linearly with the size of the input data, and we show the efficacy of our method on the largest datasets currently analyzed using an IBP model.
ER -
APA
Reed, C. & Ghahramani, Z. (2013). Scaling the Indian Buffet Process via Submodular Maximization. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):1013-1021. Available from https://proceedings.mlr.press/v28/reed13.html.