Variable Skipping for Autoregressive Range Density Estimation

Eric Liang, Zongheng Yang, Ion Stoica, Pieter Abbeel, Yan Duan, Peter Chen
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6040-6049, 2020.

Abstract

Deep autoregressive models compute point likelihood estimates of individual data points. However, many applications (e.g., database cardinality estimation) require estimating range densities, a capability that is under-explored in the current neural density estimation literature. In these applications, fast and accurate range density estimates over high-dimensional data directly impact user-perceived performance. In this paper, we explore a technique for accelerating range density estimation over deep autoregressive models. This technique, called variable skipping, exploits the sparse structure of range density queries to avoid sampling unnecessary variables during approximate inference. We show that variable skipping provides 10-100x efficiency improvements when targeting challenging high-quantile error metrics, enables complex applications such as text pattern matching, and can be realized via a simple data augmentation procedure without changing the usual maximum likelihood objective.
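The skipping idea can be sketched on a toy discrete distribution. In this illustrative sketch, an exact tabular joint stands in for the paper's deep autoregressive model, and all function names are hypothetical: a progressive-sampling estimator of a range probability simply never samples variables whose range covers the whole domain, since their conditional factor is 1. (The paper learns to approximate the required marginalization via masked inputs; the table below marginalizes exactly.)

```python
import itertools
import random

# Toy stand-in for a deep autoregressive model: an exact tabular joint
# over three binary variables. All names here are illustrative.
random.seed(0)
weights = {bits: random.random() for bits in itertools.product([0, 1], repeat=3)}
Z = sum(weights.values())
joint = {bits: w / Z for bits, w in weights.items()}

def conditional(i, assigned):
    """P(x_i = v | sampled prefix). Summing over all unassigned variables
    marginalizes them exactly; the paper's model instead approximates this
    marginal with a learned mask input, trained via data augmentation."""
    probs = {0: 0.0, 1: 0.0}
    for bits, p in joint.items():
        if all(bits[j] == v for j, v in assigned.items()):
            probs[bits[i]] += p
    s = sum(probs.values())
    return {v: p / s for v, p in probs.items()}

def range_prob_exact(ranges):
    """Ground-truth P(x_0 in R_0, x_1 in R_1, x_2 in R_2) by enumeration."""
    return sum(p for bits, p in joint.items()
               if all(bits[i] in R for i, R in enumerate(ranges)))

def range_prob_skipping(ranges, n_samples=100):
    """Progressive-sampling estimate of the range probability.
    Variable skipping: a variable whose range is the full domain is
    never sampled -- its conditional factor is exactly 1."""
    full = {0, 1}
    total = 0.0
    for _ in range(n_samples):
        assigned, weight = {}, 1.0
        for i, R in enumerate(ranges):
            if R == full:
                continue  # skipped: no sampling work, factor is 1
            cond = conditional(i, assigned)
            mass = sum(cond[v] for v in R)
            weight *= mass
            # Sample x_i from the conditional restricted to R.
            r, acc = random.random() * mass, 0.0
            for v in sorted(R):
                acc += cond[v]
                if acc >= r:
                    assigned[i] = v
                    break
        total += weight
    return total / n_samples

query = [{0, 1}, {1}, {0, 1}]  # only x_1 constrained; x_0 and x_2 are skipped
print(range_prob_skipping(query), range_prob_exact(query))
```

In this query, skipping means only one variable is ever touched per sample instead of three; for the wide, sparse queries the paper targets, that saved sampling work is where the reported speedups come from.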

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-liang20b,
  title = {Variable Skipping for Autoregressive Range Density Estimation},
  author = {Liang, Eric and Yang, Zongheng and Stoica, Ion and Abbeel, Pieter and Duan, Yan and Chen, Peter},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages = {6040--6049},
  year = {2020},
  editor = {III, Hal Daumé and Singh, Aarti},
  volume = {119},
  series = {Proceedings of Machine Learning Research},
  month = {13--18 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v119/liang20b/liang20b.pdf},
  url = {https://proceedings.mlr.press/v119/liang20b.html},
  abstract = {Deep autoregressive models compute point likelihood estimates of individual data points. However, many applications (e.g., database cardinality estimation) require estimating range densities, a capability that is under-explored in the current neural density estimation literature. In these applications, fast and accurate range density estimates over high-dimensional data directly impact user-perceived performance. In this paper, we explore a technique for accelerating range density estimation over deep autoregressive models. This technique, called variable skipping, exploits the sparse structure of range density queries to avoid sampling unnecessary variables during approximate inference. We show that variable skipping provides 10-100x efficiency improvements when targeting challenging high-quantile error metrics, enables complex applications such as text pattern matching, and can be realized via a simple data augmentation procedure without changing the usual maximum likelihood objective.}
}
Endnote
%0 Conference Paper
%T Variable Skipping for Autoregressive Range Density Estimation
%A Eric Liang
%A Zongheng Yang
%A Ion Stoica
%A Pieter Abbeel
%A Yan Duan
%A Peter Chen
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-liang20b
%I PMLR
%P 6040--6049
%U https://proceedings.mlr.press/v119/liang20b.html
%V 119
%X Deep autoregressive models compute point likelihood estimates of individual data points. However, many applications (e.g., database cardinality estimation) require estimating range densities, a capability that is under-explored in the current neural density estimation literature. In these applications, fast and accurate range density estimates over high-dimensional data directly impact user-perceived performance. In this paper, we explore a technique for accelerating range density estimation over deep autoregressive models. This technique, called variable skipping, exploits the sparse structure of range density queries to avoid sampling unnecessary variables during approximate inference. We show that variable skipping provides 10-100x efficiency improvements when targeting challenging high-quantile error metrics, enables complex applications such as text pattern matching, and can be realized via a simple data augmentation procedure without changing the usual maximum likelihood objective.
APA
Liang, E., Yang, Z., Stoica, I., Abbeel, P., Duan, Y. & Chen, P. (2020). Variable Skipping for Autoregressive Range Density Estimation. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6040-6049. Available from https://proceedings.mlr.press/v119/liang20b.html.