Neural Clustering Processes

Ari Pakman, Yueqi Wang, Catalin Mitelut, Jinhyung Lee, Liam Paninski
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7455-7465, 2020.

Abstract

Probabilistic clustering models (or equivalently, mixture models) are basic building blocks in countless statistical models and involve latent random variables over discrete spaces. For these models, posterior inference methods can be inaccurate and/or very slow. In this work we introduce deep network architectures trained with labeled samples from any generative model of clustered datasets. At test time, the networks generate approximate posterior samples of cluster labels for any new dataset of arbitrary size. We develop two complementary approaches to this task, requiring either O(N) or O(K) network forward passes per dataset, where N is the dataset size and K the number of clusters. Unlike previous approaches, our methods sample the labels of all the data points from a well-defined posterior, and can learn nonparametric Bayesian posteriors since they do not limit the number of mixture components. As a scientific application, we present a novel approach to neural spike sorting for high-density multielectrode arrays.
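The O(N) approach described above assigns points sequentially, with one network forward pass per data point scoring the existing clusters plus a new-cluster option. The sketch below illustrates that sampling loop only; the `assignment_logits` function is a hypothetical stand-in (a simple distance score, not the paper's trained network), and all names here are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def assignment_logits(x, cluster_means):
    """Stand-in for the trained network: score each existing cluster
    plus the option of opening a new cluster for point x. A real
    Neural Clustering Process would compute these scores with learned
    set-based encoders; here we use a toy distance-based score."""
    scores = [-np.sum((x - m) ** 2) for m in cluster_means]
    scores.append(-4.0)  # fixed, arbitrary score for a new cluster
    return np.array(scores)

def sample_labels(points):
    """Sequentially sample a cluster label for each point: one
    'forward pass' per point, i.e. O(N) passes per dataset, with no
    cap on the number of clusters (nonparametric behavior)."""
    labels, means, counts = [], [], []
    for x in points:
        logits = assignment_logits(x, means)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(means):              # open a new cluster
            means.append(x.copy())
            counts.append(1)
        else:                            # update running mean of cluster k
            counts[k] += 1
            means[k] += (x - means[k]) / counts[k]
        labels.append(k)
    return labels

# Two well-separated toy clusters in 2D.
data = np.concatenate([rng.normal(0, 0.1, (5, 2)),
                       rng.normal(5, 0.1, (5, 2))])
print(sample_labels(data))
```

Because the labels are sampled rather than chosen greedily, repeated runs yield different partitions, approximating draws from a posterior over clusterings.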

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-pakman20a,
  title     = {Neural Clustering Processes},
  author    = {Pakman, Ari and Wang, Yueqi and Mitelut, Catalin and Lee, Jinhyung and Paninski, Liam},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7455--7465},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/pakman20a/pakman20a.pdf},
  url       = {http://proceedings.mlr.press/v119/pakman20a.html},
  abstract  = {Probabilistic clustering models (or equivalently, mixture models) are basic building blocks in countless statistical models and involve latent random variables over discrete spaces. For these models, posterior inference methods can be inaccurate and/or very slow. In this work we introduce deep network architectures trained with labeled samples from any generative model of clustered datasets. At test time, the networks generate approximate posterior samples of cluster labels for any new dataset of arbitrary size. We develop two complementary approaches to this task, requiring either O(N) or O(K) network forward passes per dataset, where N is the dataset size and K the number of clusters. Unlike previous approaches, our methods sample the labels of all the data points from a well-defined posterior, and can learn nonparametric Bayesian posteriors since they do not limit the number of mixture components. As a scientific application, we present a novel approach to neural spike sorting for high-density multielectrode arrays.}
}
Endnote
%0 Conference Paper
%T Neural Clustering Processes
%A Ari Pakman
%A Yueqi Wang
%A Catalin Mitelut
%A Jinhyung Lee
%A Liam Paninski
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-pakman20a
%I PMLR
%P 7455--7465
%U http://proceedings.mlr.press/v119/pakman20a.html
%V 119
%X Probabilistic clustering models (or equivalently, mixture models) are basic building blocks in countless statistical models and involve latent random variables over discrete spaces. For these models, posterior inference methods can be inaccurate and/or very slow. In this work we introduce deep network architectures trained with labeled samples from any generative model of clustered datasets. At test time, the networks generate approximate posterior samples of cluster labels for any new dataset of arbitrary size. We develop two complementary approaches to this task, requiring either O(N) or O(K) network forward passes per dataset, where N is the dataset size and K the number of clusters. Unlike previous approaches, our methods sample the labels of all the data points from a well-defined posterior, and can learn nonparametric Bayesian posteriors since they do not limit the number of mixture components. As a scientific application, we present a novel approach to neural spike sorting for high-density multielectrode arrays.
APA
Pakman, A., Wang, Y., Mitelut, C., Lee, J. & Paninski, L. (2020). Neural Clustering Processes. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7455-7465. Available from http://proceedings.mlr.press/v119/pakman20a.html.