Stealing part of a production language model

Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:5680-5705, 2024.

Abstract

We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under $20 USD, our attack extracts the entire projection matrix of OpenAI’s Ada and Babbage language models. We thereby confirm, for the first time, that these black-box models have a hidden dimension of 1024 and 2048, respectively. We also recover the exact hidden dimension size of the GPT-3.5-turbo model, and estimate it would cost under $2,000 in queries to recover the entire projection matrix. We conclude with potential defenses and mitigations, and discuss the implications of possible future work that could extend our attack.
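The key fact the attack exploits is that every logit vector a transformer API returns is a linear image of a low-dimensional hidden state: logits = W·h, where W is the (vocab_size × hidden_dim) embedding projection matrix. Stacking many logit vectors therefore yields a matrix whose rank is at most the hidden dimension, which an SVD exposes. Below is a minimal sketch of this idea on a toy model; `query_logits` is a hypothetical stand-in for the production API, and the dimensions are illustrative, not those of any OpenAI model.

```python
import numpy as np

VOCAB, HIDDEN = 512, 64  # toy sizes, not a real model's

# Toy "model": the final layer projects a HIDDEN-dim hidden state
# to VOCAB logits, so every logit vector lies in W's column space.
W = np.random.default_rng(0).standard_normal((VOCAB, HIDDEN))

def query_logits(seed: int) -> np.ndarray:
    """Hypothetical stand-in for an API call returning a full logit vector."""
    h = np.random.default_rng(1000 + seed).standard_normal(HIDDEN)  # hidden state
    return W @ h

# Query with more prompts than the suspected hidden dimension and
# stack the responses into an (n_queries x VOCAB) matrix.
Q = np.stack([query_logits(seed) for seed in range(HIDDEN + 64)])

# rank(Q) <= HIDDEN, so the singular values collapse to ~0 past index HIDDEN;
# counting the non-negligible ones reads off the hidden dimension.
s = np.linalg.svd(Q, compute_uv=False)
recovered_hidden_dim = int((s > s[0] * 1e-6).sum())
print(recovered_hidden_dim)  # prints 64
```

A real API only exposes top-k log-probabilities, so the paper additionally uses logit bias to reconstruct full logit vectors and must separate true singular values from numerical noise; the sketch above only shows why the singular-value spectrum reveals the hidden size.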

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-carlini24a,
  title     = {Stealing part of a production language model},
  author    = {Carlini, Nicholas and Paleka, Daniel and Dvijotham, Krishnamurthy Dj and Steinke, Thomas and Hayase, Jonathan and Cooper, A. Feder and Lee, Katherine and Jagielski, Matthew and Nasr, Milad and Conmy, Arthur and Wallace, Eric and Rolnick, David and Tram\`{e}r, Florian},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {5680--5705},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/carlini24a/carlini24a.pdf},
  url       = {https://proceedings.mlr.press/v235/carlini24a.html},
  abstract  = {We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under \$20 USD, our attack extracts the entire projection matrix of OpenAI’s Ada and Babbage language models. We thereby confirm, for the first time, that these black-box models have a hidden dimension of 1024 and 2048, respectively. We also recover the exact hidden dimension size of the GPT-3.5-turbo model, and estimate it would cost under \$2,000 in queries to recover the entire projection matrix. We conclude with potential defenses and mitigations, and discuss the implications of possible future work that could extend our attack.}
}
Endnote
%0 Conference Paper
%T Stealing part of a production language model
%A Nicholas Carlini
%A Daniel Paleka
%A Krishnamurthy Dj Dvijotham
%A Thomas Steinke
%A Jonathan Hayase
%A A. Feder Cooper
%A Katherine Lee
%A Matthew Jagielski
%A Milad Nasr
%A Arthur Conmy
%A Eric Wallace
%A David Rolnick
%A Florian Tramèr
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-carlini24a
%I PMLR
%P 5680--5705
%U https://proceedings.mlr.press/v235/carlini24a.html
%V 235
%X We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under $20 USD, our attack extracts the entire projection matrix of OpenAI’s Ada and Babbage language models. We thereby confirm, for the first time, that these black-box models have a hidden dimension of 1024 and 2048, respectively. We also recover the exact hidden dimension size of the GPT-3.5-turbo model, and estimate it would cost under $2,000 in queries to recover the entire projection matrix. We conclude with potential defenses and mitigations, and discuss the implications of possible future work that could extend our attack.
APA
Carlini, N., Paleka, D., Dvijotham, K.D., Steinke, T., Hayase, J., Cooper, A.F., Lee, K., Jagielski, M., Nasr, M., Conmy, A., Wallace, E., Rolnick, D. & Tramèr, F. (2024). Stealing part of a production language model. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:5680-5705. Available from https://proceedings.mlr.press/v235/carlini24a.html.
