Generalised Lipschitz Regularisation Equals Distributional Robustness

Zac Cranko, Zhan Shi, Xinhua Zhang, Richard Nock, Simon Kornblith
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2178-2188, 2021.

Abstract

The problem of adversarial examples has highlighted the need for a theory of regularisation that is general enough to apply to exotic function classes, such as universal approximators. In response, we have been able to significantly sharpen existing results regarding the relationship between distributional robustness and regularisation, when defined with a transportation cost uncertainty set. The theory allows us to characterise the conditions under which the distributional robustness equals a Lipschitz-regularised model, and to tightly quantify, for the first time, the slackness under very mild assumptions. As a theoretical application we show a new result explicating the connection between adversarial learning and distributional robustness. We then give new results for how to achieve Lipschitz regularisation of kernel classifiers, which are demonstrated experimentally.
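As a rough illustration of the type of identity the abstract refers to (this is the classical bound in the 1-Wasserstein case, not the paper's generalised statement; the paper characterises when such bounds hold with equality and quantifies the slack):

```latex
% For a Lipschitz loss f on a metric space and the 1-Wasserstein
% ball of radius \varepsilon around the data distribution P:
\[
  \sup_{Q \,:\, W_1(Q, P) \le \varepsilon} \mathbb{E}_{Q}[f(X)]
  \;\le\;
  \mathbb{E}_{P}[f(X)] + \varepsilon \,\mathrm{Lip}(f),
\]
% i.e. worst-case distributional risk is dominated by the empirical
% risk plus a Lipschitz-constant regulariser scaled by the budget.
```

The paper's contribution, per the abstract, is to sharpen this correspondence for general transportation-cost uncertainty sets and exotic function classes, characterising the conditions for equality.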

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-cranko21a,
  title     = {Generalised Lipschitz Regularisation Equals Distributional Robustness},
  author    = {Cranko, Zac and Shi, Zhan and Zhang, Xinhua and Nock, Richard and Kornblith, Simon},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2178--2188},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/cranko21a/cranko21a.pdf},
  url       = {https://proceedings.mlr.press/v139/cranko21a.html},
  abstract  = {The problem of adversarial examples has highlighted the need for a theory of regularisation that is general enough to apply to exotic function classes, such as universal approximators. In response, we have been able to significantly sharpen existing results regarding the relationship between distributional robustness and regularisation, when defined with a transportation cost uncertainty set. The theory allows us to characterise the conditions under which the distributional robustness equals a Lipschitz-regularised model, and to tightly quantify, for the first time, the slackness under very mild assumptions. As a theoretical application we show a new result explicating the connection between adversarial learning and distributional robustness. We then give new results for how to achieve Lipschitz regularisation of kernel classifiers, which are demonstrated experimentally.}
}
Endnote
%0 Conference Paper
%T Generalised Lipschitz Regularisation Equals Distributional Robustness
%A Zac Cranko
%A Zhan Shi
%A Xinhua Zhang
%A Richard Nock
%A Simon Kornblith
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-cranko21a
%I PMLR
%P 2178--2188
%U https://proceedings.mlr.press/v139/cranko21a.html
%V 139
%X The problem of adversarial examples has highlighted the need for a theory of regularisation that is general enough to apply to exotic function classes, such as universal approximators. In response, we have been able to significantly sharpen existing results regarding the relationship between distributional robustness and regularisation, when defined with a transportation cost uncertainty set. The theory allows us to characterise the conditions under which the distributional robustness equals a Lipschitz-regularised model, and to tightly quantify, for the first time, the slackness under very mild assumptions. As a theoretical application we show a new result explicating the connection between adversarial learning and distributional robustness. We then give new results for how to achieve Lipschitz regularisation of kernel classifiers, which are demonstrated experimentally.
APA
Cranko, Z., Shi, Z., Zhang, X., Nock, R. & Kornblith, S. (2021). Generalised Lipschitz Regularisation Equals Distributional Robustness. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2178-2188. Available from https://proceedings.mlr.press/v139/cranko21a.html.