MEDUSA: Medical Data Under Shadow Attacks via Hybrid Model Inversion

Asfandyar Azhar, Paul Thielen, Curtis Langlotz
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:1378-1386, 2025.

Abstract

We introduce MEDUSA (Medical Data Under Shadow Attacks), a novel hybrid model inversion framework that leverages gradient-based optimization and TCNNs to reconstruct high-fidelity medical images from model outputs in a gray-box setting. Unlike traditional attacks that require full model details, MEDUSA uses surrogate shadow models trained on publicly available data, simulating the limited-information scenarios often encountered in practice. Our approach shows that even with restricted access, high-quality image reconstructions are possible, raising serious privacy concerns for patient data. Our contributions include demonstrating that combining gradient-based methods with TCNNs yields potent reconstructions even under limited model access, and providing a detailed analysis of how different input configurations affect reconstruction quality. We also evaluate the reconstructions as viable training data, finding that they approximate real images well enough to be used for model training. Finally, we propose robust defensive mechanisms, including output vector truncation, Gaussian noise, and a new k-NN smearing technique, to mitigate these privacy risks.
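For readers who want a concrete picture of the gray-box setting the abstract describes, the sketch below illustrates gradient-based inversion against a surrogate shadow classifier in PyTorch, together with two of the output-side defenses mentioned (output vector truncation and Gaussian noise). All names (ShadowNet, invert, defend_output), architectures, and hyperparameters are illustrative assumptions rather than the authors' implementation, and the TCNN decoder component of MEDUSA, as well as k-NN smearing, are omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ShadowNet(nn.Module):
    """Toy surrogate ("shadow") classifier, standing in for one trained on public data."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def total_variation(x):
    # Smoothness prior that discourages noisy, high-frequency reconstructions.
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def invert(shadow, target_probs, shape=(1, 1, 64, 64), steps=500, lr=0.1, tv_weight=1e-3):
    # Optimize a candidate image so that the shadow model reproduces the
    # confidence vector observed from the target model.
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        probs = F.softmax(shadow(x), dim=1)
        loss = F.mse_loss(probs, target_probs) + tv_weight * total_variation(x)
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep pixel values in a valid range
    return x.detach()

def defend_output(probs, top_k=1, noise_std=0.05):
    # Defense sketch: truncate the output vector to its top-k entries and
    # perturb it with Gaussian noise before releasing it to the client.
    vals, idx = probs.topk(top_k, dim=1)
    truncated = torch.zeros_like(probs).scatter_(1, idx, vals)
    return (truncated + noise_std * torch.randn_like(truncated)).clamp(min=0.0)

Example usage under these assumptions: given a leaked confidence vector such as torch.tensor([[0.9, 0.1]]) returned by the target model, invert(ShadowNet(), leaked) performs the gradient-based reconstruction against the surrogate; applying defend_output to the target's outputs before release is one way to blunt such an attack.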

Cite this Paper

BibTeX
@InProceedings{pmlr-v258-azhar25a,
  title     = {MEDUSA: Medical Data Under Shadow Attacks via Hybrid Model Inversion},
  author    = {Azhar, Asfandyar and Thielen, Paul and Langlotz, Curtis},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {1378--1386},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/azhar25a/azhar25a.pdf},
  url       = {https://proceedings.mlr.press/v258/azhar25a.html},
  abstract  = {We introduce MEDUSA (Medical Data Under Shadow Attacks), a novel hybrid model inversion framework that leverages gradient-based optimization and TCNNs to reconstruct high-fidelity medical images from model outputs in a gray-box setting. Unlike traditional attacks requiring full model details, MEDUSA uses surrogate shadow models trained on publicly available data, simulating limited-information scenarios often encountered in practice. Our approach shows that even with restricted access, quality image reconstructions are possible, raising serious privacy concerns for patient data. Contributions include demonstrating that a combination of gradient-based methods and TCNNs yields potent reconstructions, even with limited model access, and providing a detailed analysis of how different input configurations impact reconstruction quality. We also evaluate the reconstructions as viable training data, finding that they can approximate real images well enough to use for model training. Finally, we propose robust defensive mechanisms such as output vector truncation, Gaussian noise, and a new k-NN smearing technique to tackle privacy risks.}
}
Endnote
%0 Conference Paper
%T MEDUSA: Medical Data Under Shadow Attacks via Hybrid Model Inversion
%A Asfandyar Azhar
%A Paul Thielen
%A Curtis Langlotz
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-azhar25a
%I PMLR
%P 1378--1386
%U https://proceedings.mlr.press/v258/azhar25a.html
%V 258
%X We introduce MEDUSA (Medical Data Under Shadow Attacks), a novel hybrid model inversion framework that leverages gradient-based optimization and TCNNs to reconstruct high-fidelity medical images from model outputs in a gray-box setting. Unlike traditional attacks requiring full model details, MEDUSA uses surrogate shadow models trained on publicly available data, simulating limited-information scenarios often encountered in practice. Our approach shows that even with restricted access, quality image reconstructions are possible, raising serious privacy concerns for patient data. Contributions include demonstrating that a combination of gradient-based methods and TCNNs yields potent reconstructions, even with limited model access, and providing a detailed analysis of how different input configurations impact reconstruction quality. We also evaluate the reconstructions as viable training data, finding that they can approximate real images well enough to use for model training. Finally, we propose robust defensive mechanisms such as output vector truncation, Gaussian noise, and a new k-NN smearing technique to tackle privacy risks.
APA
Azhar, A., Thielen, P., & Langlotz, C. (2025). MEDUSA: Medical Data Under Shadow Attacks via Hybrid Model Inversion. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:1378-1386. Available from https://proceedings.mlr.press/v258/azhar25a.html.