Weighted Tensor Decomposition for Learning Latent Variables with Partial Data

Omer Gottesman, Weiwei Pan, Finale Doshi-Velez
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1664-1672, 2018.

Abstract

Tensor decomposition methods are popular tools for learning latent variables given only lower-order moments of the data. However, the standard assumption is that we have sufficient data to estimate these moments to high accuracy. In this work, we consider the case in which certain dimensions of the data are not always observed (common in applied settings, where not all measurements may be taken for all observations), resulting in moment estimates of varying quality. We derive a weighted tensor decomposition approach that is computationally as efficient as the non-weighted approach, and demonstrate that it outperforms methods that do not appropriately leverage these less-observed dimensions.
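As a loose illustration of the setting the abstract describes (not the paper's algorithm), the sketch below estimates a third-order moment tensor from partially observed data and fits a low-rank symmetric CP decomposition by weighted least squares, where each tensor entry is weighted by how often its three dimensions were co-observed. All names, constants, and the plain gradient-descent fit are illustrative choices.

```python
import numpy as np

# Toy setup: data generated from k latent components, with some dimensions
# observed less often than others. Everything here is an illustrative
# assumption, not the construction used in the paper.
rng = np.random.default_rng(0)
d, k, n = 4, 2, 3000

A = rng.normal(size=(d, k))               # latent components
H = rng.exponential(size=(n, k))          # nonnegative loadings (nonzero third moments)
X = H @ A.T + 0.1 * rng.normal(size=(n, d))

# Partial observation: each dimension is observed with its own probability.
p = np.linspace(0.3, 1.0, d)
mask = (rng.random((n, d)) < p).astype(float)

# Empirical third moment; entry (i, j, l) averages only samples in which all
# three dimensions were observed, so entries vary in estimation quality.
Xm = np.where(mask > 0, X, 0.0)
counts = np.einsum('ni,nj,nl->ijl', mask, mask, mask)
T = np.einsum('ni,nj,nl->ijl', Xm, Xm, Xm) / np.maximum(counts, 1.0)

# Weight each entry by its relative co-observation count, so that
# better-estimated entries contribute more to the fit.
W = counts / counts.max()

def cp(F):
    """Symmetric rank-k CP reconstruction from a d x k factor matrix F."""
    return np.einsum('ir,jr,lr->ijl', F, F, F)

def loss(F):
    return 0.5 * np.sum(W * (cp(F) - T) ** 2)

# Plain normalized gradient descent on the weighted squared error.
F = 0.1 * rng.normal(size=(d, k))
loss_init = loss(F)
for _ in range(1500):
    R = W * (cp(F) - T)
    G = 3.0 * np.einsum('ijl,jr,lr->ir', R, F, F)  # gradient (symmetric CP)
    F -= 0.02 * G / (np.linalg.norm(G) + 1e-12)
loss_final = loss(F)
```

The weighting step is the point of interest: with uniform weights, the poorly estimated entries (those involving rarely observed dimensions) would pull the fit as strongly as the well-estimated ones.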

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-gottesman18a,
  title     = {Weighted Tensor Decomposition for Learning Latent Variables with Partial Data},
  author    = {Gottesman, Omer and Pan, Weiwei and Doshi-Velez, Finale},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {1664--1672},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/gottesman18a/gottesman18a.pdf},
  url       = {https://proceedings.mlr.press/v84/gottesman18a.html},
  abstract  = {Tensor decomposition methods are popular tools for learning latent variables given only lower-order moments of the data. However, the standard assumption is that we have sufficient data to estimate these moments to high accuracy. In this work, we consider the case in which certain dimensions of the data are not always observed (common in applied settings, where not all measurements may be taken for all observations), resulting in moment estimates of varying quality. We derive a weighted tensor decomposition approach that is computationally as efficient as the non-weighted approach, and demonstrate that it outperforms methods that do not appropriately leverage these less-observed dimensions.}
}
Endnote
%0 Conference Paper
%T Weighted Tensor Decomposition for Learning Latent Variables with Partial Data
%A Omer Gottesman
%A Weiwei Pan
%A Finale Doshi-Velez
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-gottesman18a
%I PMLR
%P 1664--1672
%U https://proceedings.mlr.press/v84/gottesman18a.html
%V 84
%X Tensor decomposition methods are popular tools for learning latent variables given only lower-order moments of the data. However, the standard assumption is that we have sufficient data to estimate these moments to high accuracy. In this work, we consider the case in which certain dimensions of the data are not always observed (common in applied settings, where not all measurements may be taken for all observations), resulting in moment estimates of varying quality. We derive a weighted tensor decomposition approach that is computationally as efficient as the non-weighted approach, and demonstrate that it outperforms methods that do not appropriately leverage these less-observed dimensions.
APA
Gottesman, O., Pan, W. & Doshi-Velez, F. (2018). Weighted Tensor Decomposition for Learning Latent Variables with Partial Data. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:1664-1672. Available from https://proceedings.mlr.press/v84/gottesman18a.html.

Related Material