Universal Multi-Party Poisoning Attacks

Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4274-4283, 2019.

Abstract

In this work, we demonstrate universal multi-party poisoning attacks that adapt and apply to any multi-party learning process with an arbitrary interaction pattern between the parties. More generally, we introduce and study $(k,p)$-poisoning attacks in which an adversary controls $k\in[m]$ of the parties, and for each corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). We prove that for any "bad" property $B$ of the final trained hypothesis $h$ (e.g., $h$ failing on a particular test example or having "large" risk) that has an arbitrarily small constant probability of happening without the attack, there is always a $(k,p)$-poisoning attack that increases the probability of $B$ from $\mu$ to $\mu^{1-p \cdot k/m} = \mu + \Omega(p \cdot k/m)$. Our attack only uses clean labels, and it is online, as it only knows the data shared so far.
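As a quick numerical illustration of the bound above, the following sketch evaluates $\mu^{1-p \cdot k/m}$ for a few parameter settings (the values of $\mu$, $m$, $k$, and $p$ below, and the helper name poisoned_probability, are illustrative choices, not taken from the paper):

def poisoned_probability(mu: float, p: float, k: int, m: int) -> float:
    # Probability of the bad event B guaranteed by a (k, p)-poisoning
    # attack, per the paper's bound, starting from baseline probability mu.
    return mu ** (1 - p * k / m)

mu = 0.01  # baseline probability of the bad property B without an attack
m = 10     # total number of parties
for k in (1, 5, 10):           # number of corrupted parties
    for p in (0.1, 0.5, 1.0):  # fraction of each corrupted party's data that may be poisoned
        bound = poisoned_probability(mu, p, k, m)
        print(f"k={k:2d}, p={p:.1f}: {mu} -> {bound:.4f} (p*k/m = {p * k / m:.2f})")

For instance, with half the parties corrupted ($k/m = 0.5$) and fully adversarial data from each corrupted party ($p = 1$), a bad event of baseline probability $\mu = 0.01$ is amplified to $0.01^{0.5} = 0.1$, a tenfold increase, consistent with the additive $\mu + \Omega(p \cdot k/m)$ form of the bound for constant $\mu$.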

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-mahloujifar19a,
  title     = {Universal Multi-Party Poisoning Attacks},
  author    = {Mahloujifar, Saeed and Mahmoody, Mohammad and Mohammed, Ameer},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4274--4283},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/mahloujifar19a/mahloujifar19a.pdf},
  url       = {https://proceedings.mlr.press/v97/mahloujifar19a.html},
  abstract  = {In this work, we demonstrate universal multi-party poisoning attacks that adapt and apply to any multi-party learning process with an arbitrary interaction pattern between the parties. More generally, we introduce and study $(k,p)$-poisoning attacks in which an adversary controls $k\in[m]$ of the parties, and for each corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). We prove that for any "bad" property $B$ of the final trained hypothesis $h$ (e.g., $h$ failing on a particular test example or having "large" risk) that has an arbitrarily small constant probability of happening without the attack, there is always a $(k,p)$-poisoning attack that increases the probability of $B$ from $\mu$ to $\mu^{1-p \cdot k/m} = \mu + \Omega(p \cdot k/m)$. Our attack only uses clean labels, and it is online, as it only knows the data shared so far.}
}
Endnote
%0 Conference Paper
%T Universal Multi-Party Poisoning Attacks
%A Saeed Mahloujifar
%A Mohammad Mahmoody
%A Ameer Mohammed
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-mahloujifar19a
%I PMLR
%P 4274--4283
%U https://proceedings.mlr.press/v97/mahloujifar19a.html
%V 97
%X In this work, we demonstrate universal multi-party poisoning attacks that adapt and apply to any multi-party learning process with an arbitrary interaction pattern between the parties. More generally, we introduce and study $(k,p)$-poisoning attacks in which an adversary controls $k\in[m]$ of the parties, and for each corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). We prove that for any "bad" property $B$ of the final trained hypothesis $h$ (e.g., $h$ failing on a particular test example or having "large" risk) that has an arbitrarily small constant probability of happening without the attack, there is always a $(k,p)$-poisoning attack that increases the probability of $B$ from $\mu$ to $\mu^{1-p \cdot k/m} = \mu + \Omega(p \cdot k/m)$. Our attack only uses clean labels, and it is online, as it only knows the data shared so far.
APA
Mahloujifar, S., Mahmoody, M. & Mohammed, A. (2019). Universal Multi-Party Poisoning Attacks. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4274-4283. Available from https://proceedings.mlr.press/v97/mahloujifar19a.html.
