Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries

Arjun Nitin Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:863-873, 2021.

Abstract

Understanding the fundamental limits of robust supervised learning has emerged as a problem of immense interest, from both practical and theoretical standpoints. In particular, it is critical to determine classifier-agnostic bounds on the training loss to establish when learning is possible. In this paper, we determine optimal lower bounds on the cross-entropy loss in the presence of test-time adversaries, along with the corresponding optimal classification outputs. Our formulation of the bound as a solution to an optimization problem is general enough to encompass any loss function depending on soft classifier outputs. We also propose and provide a proof of correctness for a bespoke algorithm to compute this lower bound efficiently, allowing us to determine lower bounds for multiple practical datasets of interest. We use our lower bounds as a diagnostic tool to determine the effectiveness of current robust training methods and find a gap from optimality at larger budgets. Finally, we investigate the possibility of using optimal classification outputs as soft labels to empirically improve robust training.
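As a rough illustration of the kind of optimization such a bound arises from, the sketch below lower-bounds the average binary cross-entropy achievable on a toy set of examples when a test-time adversary can force certain pairs of differently-labelled points onto a common perturbed input, so any classifier must assign both points the same soft output. The conflict pairs, the pairwise agreement constraints, and the use of cvxpy are illustrative assumptions only; the paper's actual formulation (a general optimization over soft classifier outputs, solved with the authors' bespoke algorithm) is not reproduced here.

# Illustrative sketch only: a toy convex program in the spirit of a
# cross-entropy lower bound under a test-time adversary. It is NOT the
# paper's formulation or its bespoke solver.
import cvxpy as cp
import numpy as np

# Hypothetical toy instance: labels y and "conflict" pairs (i, j) whose
# adversarial perturbation sets overlap, so both examples can be pushed
# to the same input and must receive the same soft prediction.
y = np.array([0, 1, 0, 1])
conflicts = [(0, 1), (2, 3)]
n = len(y)

q = cp.Variable(n)  # probability assigned to class 1 for each example
avg_ce = cp.sum(cp.multiply(y, -cp.log(q)) +
                cp.multiply(1 - y, -cp.log(1 - q))) / n
constraints = [q >= 1e-6, q <= 1 - 1e-6]
constraints += [q[i] == q[j] for (i, j) in conflicts]  # shared outputs on conflicts

problem = cp.Problem(cp.Minimize(avg_ce), constraints)
problem.solve()
print("Lower bound on achievable average cross-entropy:", problem.value)
# For this toy instance the optimum is q = 0.5 on each conflicting pair,
# giving an average loss of log(2) ~= 0.693, which no classifier can beat.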

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-bhagoji21a,
  title     = {Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries},
  author    = {Bhagoji, Arjun Nitin and Cullina, Daniel and Sehwag, Vikash and Mittal, Prateek},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {863--873},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/bhagoji21a/bhagoji21a.pdf},
  url       = {https://proceedings.mlr.press/v139/bhagoji21a.html},
  abstract  = {Understanding the fundamental limits of robust supervised learning has emerged as a problem of immense interest, from both practical and theoretical standpoints. In particular, it is critical to determine classifier-agnostic bounds on the training loss to establish when learning is possible. In this paper, we determine optimal lower bounds on the cross-entropy loss in the presence of test-time adversaries, along with the corresponding optimal classification outputs. Our formulation of the bound as a solution to an optimization problem is general enough to encompass any loss function depending on soft classifier outputs. We also propose and provide a proof of correctness for a bespoke algorithm to compute this lower bound efficiently, allowing us to determine lower bounds for multiple practical datasets of interest. We use our lower bounds as a diagnostic tool to determine the effectiveness of current robust training methods and find a gap from optimality at larger budgets. Finally, we investigate the possibility of using optimal classification outputs as soft labels to empirically improve robust training.}
}
Endnote
%0 Conference Paper
%T Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
%A Arjun Nitin Bhagoji
%A Daniel Cullina
%A Vikash Sehwag
%A Prateek Mittal
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-bhagoji21a
%I PMLR
%P 863--873
%U https://proceedings.mlr.press/v139/bhagoji21a.html
%V 139
%X Understanding the fundamental limits of robust supervised learning has emerged as a problem of immense interest, from both practical and theoretical standpoints. In particular, it is critical to determine classifier-agnostic bounds on the training loss to establish when learning is possible. In this paper, we determine optimal lower bounds on the cross-entropy loss in the presence of test-time adversaries, along with the corresponding optimal classification outputs. Our formulation of the bound as a solution to an optimization problem is general enough to encompass any loss function depending on soft classifier outputs. We also propose and provide a proof of correctness for a bespoke algorithm to compute this lower bound efficiently, allowing us to determine lower bounds for multiple practical datasets of interest. We use our lower bounds as a diagnostic tool to determine the effectiveness of current robust training methods and find a gap from optimality at larger budgets. Finally, we investigate the possibility of using optimal classification outputs as soft labels to empirically improve robust training.
APA
Bhagoji, A.N., Cullina, D., Sehwag, V. & Mittal, P. (2021). Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:863-873. Available from https://proceedings.mlr.press/v139/bhagoji21a.html.
