General Staircase Mechanisms for Optimal Differential Privacy

Alex Kulesza, Ananda Theertha Suresh, Yuyan Wang
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4564-4572, 2025.

Abstract

We derive the optimal differentially private additive noise mechanism for queries in $\mathbb{R}^d$ when sensitivity and error are defined by an arbitrary norm $||\cdot||_K$. The optimal mechanism is a generalization of the staircase mechanism, which is known to be optimal under the $\ell_1$ norm when $d \leq 2$; we extend the mechanism and its guarantee to arbitrary norms and dimensions, proving a conjecture of Geng et al. [2015] along the way. The generalized staircase mechanism we derive can be viewed as a refinement of the $K$-norm mechanism of Hardt and Talwar [2010], with improvements particularly evident in the low-privacy regime as $\epsilon \to \infty$. We show how to implement the generalized staircase mechanism efficiently, given an efficient algorithm for sampling the unit $K$-norm ball, and demonstrate that it significantly reduces error in realistic settings, including under non-standard norms that are common in practice, and across a range of error metrics.
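For context, the one-dimensional staircase mechanism that this paper generalizes admits an exact sampler built from a few primitive draws (a sign, a geometric, a uniform, and a Bernoulli), following the classical $\ell_1$ construction of Geng and Viswanath. The sketch below illustrates that baseline case only; the function name `staircase_noise`, its defaults, and the choice of the $\ell_1$-optimal $\gamma = 1/(1+e^{\epsilon/2})$ are illustrative assumptions, not the paper's generalized $K$-norm sampler.

```python
import math
import random

def staircase_noise(epsilon, delta=1.0, gamma=None):
    """Draw one sample of 1-D staircase noise (hypothetical sketch).

    epsilon: privacy parameter; delta: l1 sensitivity of the query;
    gamma: step-split parameter in [0, 1], defaulting to the value
    that minimizes expected l1 error, 1 / (1 + e^{epsilon/2}).
    """
    if gamma is None:
        gamma = 1.0 / (1.0 + math.exp(epsilon / 2.0))
    b = math.exp(-epsilon)  # ratio between successive staircase steps

    sign = 1 if random.random() < 0.5 else -1
    # Geometric step index with P(G = i) = (1 - b) * b^i, via inversion.
    geom = int(math.log(random.random()) / math.log(b))
    unif = random.random()
    # Choose the inner (width gamma) or outer (width 1 - gamma) part of the step.
    inner = random.random() < gamma / (gamma + (1.0 - gamma) * b)
    if inner:
        return sign * (geom + gamma * unif) * delta
    return sign * (geom + gamma + (1.0 - gamma) * unif) * delta
```

The paper's contribution replaces this scalar staircase with a staircase over nested copies of an arbitrary unit $K$-norm ball, which is why an efficient sampler for that ball suffices to implement the general mechanism.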

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-kulesza25a,
  title = {General Staircase Mechanisms for Optimal Differential Privacy},
  author = {Kulesza, Alex and Suresh, Ananda Theertha and Wang, Yuyan},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages = {4564--4572},
  year = {2025},
  editor = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume = {258},
  series = {Proceedings of Machine Learning Research},
  month = {03--05 May},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/kulesza25a/kulesza25a.pdf},
  url = {https://proceedings.mlr.press/v258/kulesza25a.html},
  abstract = {We derive the optimal differentially private additive noise mechanism for queries in $\mathbb{R}^d$ when sensitivity and error are defined by an arbitrary norm $||\cdot||_K$. The optimal mechanism is a generalization of the staircase mechanism, which is known to be optimal under the $\ell_1$ norm when $d \leq 2$; we extend the mechanism and its guarantee to arbitrary norms and dimensions, proving a conjecture of Geng et al. [2015] along the way. The generalized staircase mechanism we derive can be viewed as a refinement of the $K$-norm mechanism of Hardt and Talwar [2010], with improvements particularly evident in the low-privacy regime as $\epsilon \to \infty$. We show how to implement the generalized staircase mechanism efficiently, given an efficient algorithm for sampling the unit $K$-norm ball, and demonstrate that it significantly reduces error in realistic settings, including under non-standard norms that are common in practice, and across a range of error metrics.}
}
Endnote
%0 Conference Paper
%T General Staircase Mechanisms for Optimal Differential Privacy
%A Alex Kulesza
%A Ananda Theertha Suresh
%A Yuyan Wang
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-kulesza25a
%I PMLR
%P 4564--4572
%U https://proceedings.mlr.press/v258/kulesza25a.html
%V 258
%X We derive the optimal differentially private additive noise mechanism for queries in $\mathbb{R}^d$ when sensitivity and error are defined by an arbitrary norm $||\cdot||_K$. The optimal mechanism is a generalization of the staircase mechanism, which is known to be optimal under the $\ell_1$ norm when $d \leq 2$; we extend the mechanism and its guarantee to arbitrary norms and dimensions, proving a conjecture of Geng et al. [2015] along the way. The generalized staircase mechanism we derive can be viewed as a refinement of the $K$-norm mechanism of Hardt and Talwar [2010], with improvements particularly evident in the low-privacy regime as $\epsilon \to \infty$. We show how to implement the generalized staircase mechanism efficiently, given an efficient algorithm for sampling the unit $K$-norm ball, and demonstrate that it significantly reduces error in realistic settings, including under non-standard norms that are common in practice, and across a range of error metrics.
APA
Kulesza, A., Suresh, A.T. & Wang, Y. (2025). General Staircase Mechanisms for Optimal Differential Privacy. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4564-4572. Available from https://proceedings.mlr.press/v258/kulesza25a.html.