Logit-based ensemble distribution distillation for robust autoregressive sequence uncertainties

Yassir Fathullah, Guoxuan Xia, Mark J. F. Gales
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:582-591, 2023.

Abstract

Efficiently and reliably estimating uncertainty is an important objective in deep learning. It is especially pertinent to autoregressive sequence tasks, where training and inference costs are typically very high. However, existing research has predominantly focused on tasks with static data such as image classification. In this work, we investigate Ensemble Distribution Distillation (EDD) applied to large-scale natural language sequence-to-sequence data. EDD aims to compress the superior uncertainty performance of an expensive (teacher) ensemble into a cheaper (student) single model. Importantly, the ability to separate knowledge (epistemic) and data (aleatoric) uncertainty is retained. Existing probability-space approaches to EDD, however, are difficult to scale to large vocabularies. We show, for modern transformer architectures on large-scale translation tasks, that modelling the ensemble logits, instead of softmax probabilities, leads to significantly better students. Moreover, the students surprisingly even outperform Deep Ensembles by up to $\sim$10% AUROC on out-of-distribution detection, whilst matching them at in-distribution translation.
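To make the logit-modelling idea concrete, here is a minimal sketch (not the authors' exact objective; the function name, the diagonal-Gaussian parameterisation, and all shapes are illustrative assumptions): a student predicts a Gaussian over the teacher ensemble's per-token logits and is trained by negative log-likelihood, avoiding any softmax over the large vocabulary.

```python
import numpy as np

def gaussian_logit_distillation_loss(teacher_logits, student_mean, student_log_var):
    """Mean negative log-likelihood of each ensemble member's logits under a
    diagonal Gaussian predicted by the student (illustrative sketch only).

    teacher_logits : (M, V) array, logits from M ensemble members over vocab V
    student_mean   : (V,)   array, student's predicted mean logit per vocab entry
    student_log_var: (V,)   array, student's predicted log-variance per vocab entry
    """
    var = np.exp(student_log_var)
    # Per-member, per-vocab-entry Gaussian NLL, averaged over both axes.
    nll = 0.5 * (student_log_var
                 + (teacher_logits - student_mean) ** 2 / var
                 + np.log(2.0 * np.pi))
    return nll.mean()

# Toy example: 4 teacher members, vocabulary of 5 (values are synthetic).
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 5))
mean = teacher.mean(axis=0)                      # empirical moments of the
log_var = np.log(teacher.var(axis=0) + 1e-6)     # teacher logits
loss = gaussian_logit_distillation_loss(teacher, mean, log_var)
```

The spread captured by `student_log_var` reflects ensemble disagreement, which is what allows the student to report knowledge (epistemic) uncertainty separately from data (aleatoric) uncertainty at inference time.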

Cite this Paper

BibTeX
@InProceedings{pmlr-v216-fathullah23a,
  title     = {Logit-based ensemble distribution distillation for robust autoregressive sequence uncertainties},
  author    = {Fathullah, Yassir and Xia, Guoxuan and Gales, Mark J. F.},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {582--591},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/fathullah23a/fathullah23a.pdf},
  url       = {https://proceedings.mlr.press/v216/fathullah23a.html},
}
Endnote
%0 Conference Paper
%T Logit-based ensemble distribution distillation for robust autoregressive sequence uncertainties
%A Yassir Fathullah
%A Guoxuan Xia
%A Mark J. F. Gales
%B Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2023
%E Robin J. Evans
%E Ilya Shpitser
%F pmlr-v216-fathullah23a
%I PMLR
%P 582--591
%U https://proceedings.mlr.press/v216/fathullah23a.html
%V 216
APA
Fathullah, Y., Xia, G. &amp; Gales, M. J. F. (2023). Logit-based ensemble distribution distillation for robust autoregressive sequence uncertainties. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:582-591. Available from https://proceedings.mlr.press/v216/fathullah23a.html.