Adversarial Attacks on Probabilistic Autoregressive Forecasting Models

Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2356-2365, 2020.

Abstract

We develop an effective generation of adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values. This setting includes the recently proposed deep probabilistic autoregressive forecasting models that estimate the probability distribution of a time series given its past and achieve state-of-the-art results in a diverse set of application domains. The key technical challenge we address is how to effectively differentiate through the Monte-Carlo estimation of statistics of the output sequence joint distribution. Additionally, we extend prior work on probabilistic forecasting to the Bayesian setting which allows conditioning on future observations, instead of only on past observations. We demonstrate that our approach can successfully generate attacks with small input perturbations in two challenging tasks where robust decision making is crucial – stock market trading and prediction of electricity consumption.
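The central technical point, differentiating through a Monte-Carlo estimate of a statistic of sampled outputs, is commonly handled with the reparameterization trick. The sketch below is a minimal illustration of that generic idea, not the paper's implementation: it assumes a Gaussian output distribution and a hypothetical statistic f(x) = max(x, 0), and estimates the gradient of E[f(x)] with respect to the mean by averaging f'(x) over reparameterized samples.

```python
import numpy as np

def reparam_grad_mu(mu, sigma, statistic_grad, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of d/dmu E[f(x)] for x ~ N(mu, sigma^2).

    Reparameterize x = mu + sigma * eps with eps ~ N(0, 1), so the
    sampling noise no longer depends on mu and the gradient passes
    through the samples: d/dmu E[f(x)] = E[f'(x) * dx/dmu] = E[f'(x)].
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_samples)
    x = mu + sigma * eps          # differentiable in mu (dx/dmu = 1)
    return statistic_grad(x).mean()

# Hypothetical statistic f(x) = max(x, 0), so f'(x) = 1[x > 0].
grad = reparam_grad_mu(0.5, 1.0, lambda x: (x > 0).astype(float))
```

For this example the exact gradient is P(x > 0) = Phi(mu/sigma), so the estimate should be close to Phi(0.5) ≈ 0.691; in a framework with automatic differentiation the same construction lets gradients flow from the estimated statistic back to the model inputs being perturbed.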

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-dang-nhu20a,
  title     = {Adversarial Attacks on Probabilistic Autoregressive Forecasting Models},
  author    = {Dang-Nhu, Rapha{\"e}l and Singh, Gagandeep and Bielik, Pavol and Vechev, Martin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2356--2365},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/dang-nhu20a/dang-nhu20a.pdf},
  url       = {http://proceedings.mlr.press/v119/dang-nhu20a.html},
  abstract  = {We develop an effective generation of adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values. This setting includes the recently proposed deep probabilistic autoregressive forecasting models that estimate the probability distribution of a time series given its past and achieve state-of-the-art results in a diverse set of application domains. The key technical challenge we address is how to effectively differentiate through the Monte-Carlo estimation of statistics of the output sequence joint distribution. Additionally, we extend prior work on probabilistic forecasting to the Bayesian setting which allows conditioning on future observations, instead of only on past observations. We demonstrate that our approach can successfully generate attacks with small input perturbations in two challenging tasks where robust decision making is crucial – stock market trading and prediction of electricity consumption.}
}
Endnote
%0 Conference Paper
%T Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
%A Raphaël Dang-Nhu
%A Gagandeep Singh
%A Pavol Bielik
%A Martin Vechev
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-dang-nhu20a
%I PMLR
%P 2356--2365
%U http://proceedings.mlr.press/v119/dang-nhu20a.html
%V 119
%X We develop an effective generation of adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values. This setting includes the recently proposed deep probabilistic autoregressive forecasting models that estimate the probability distribution of a time series given its past and achieve state-of-the-art results in a diverse set of application domains. The key technical challenge we address is how to effectively differentiate through the Monte-Carlo estimation of statistics of the output sequence joint distribution. Additionally, we extend prior work on probabilistic forecasting to the Bayesian setting which allows conditioning on future observations, instead of only on past observations. We demonstrate that our approach can successfully generate attacks with small input perturbations in two challenging tasks where robust decision making is crucial – stock market trading and prediction of electricity consumption.
APA
Dang-Nhu, R., Singh, G., Bielik, P., & Vechev, M. (2020). Adversarial Attacks on Probabilistic Autoregressive Forecasting Models. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2356-2365. Available from http://proceedings.mlr.press/v119/dang-nhu20a.html.