Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations

Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum, Tom Goldstein
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:7484-7512, 2022.

Abstract

Existing techniques for model inversion typically rely on hard-to-tune regularizers, such as total variation or feature regularization, which must be individually calibrated for each network in order to produce adequate images. In this work, we introduce Plug-In Inversion, which relies on a simple set of augmentations and does not require excessive hyper-parameter tuning. Under our proposed augmentation-based scheme, the same set of augmentation hyper-parameters can be used for inverting a wide range of image classification models, regardless of input dimensions or the architecture. We illustrate the practicality of our approach by inverting Vision Transformers (ViTs) and Multi-Layer Perceptrons (MLPs) trained on the ImageNet dataset, tasks which to the best of our knowledge have not been successfully accomplished by any previous works.
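The augmentation-based scheme described in the abstract can be illustrated in a toy setting: gradient ascent on a target-class logit, with a random flip and spatial jitter applied at each step and the gradient mapped back through that augmentation. The sketch below is only an assumption-laden stand-in, not the paper's implementation: the "classifier" is a fixed random linear map over 8x8 inputs, and all names (`invert`, `augment`, `logits`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a fixed random linear map
# from flattened 8x8 "images" to 10 class logits. (The paper inverts
# real ViTs, MLPs, and CNNs; this is purely illustrative.)
W = rng.standard_normal((10, 8 * 8))

def logits(x):
    """Class scores for an 8x8 input under the toy linear model."""
    return W @ x.ravel()

def invert(target, steps=300, lr=0.1):
    """Ascend the target logit, augmenting the input each step."""
    x = 0.01 * rng.standard_normal((8, 8))
    for _ in range(steps):
        # Sample a cheap augmentation: random horizontal flip plus
        # a small spatial jitter (shift by -1, 0, or +1 pixel).
        flip = rng.random() < 0.5
        dx, dy = rng.integers(-1, 2, size=2)

        # For this linear toy model, the gradient of the target logit
        # with respect to the *augmented* input is simply row `target`
        # of W (a real network would need autograd here).
        g = W[target].reshape(8, 8)

        # Backpropagate the gradient through the augmentation by
        # applying its inverse: undo the shift, then undo the flip.
        g = np.roll(g, (-int(dx), -int(dy)), axis=(0, 1))
        if flip:
            g = g[:, ::-1]

        x = x + lr * g
    return x
```

Because each update averages the target gradient over random spatial transforms, the recovered input is biased toward images whose class evidence is stable under those transforms; this is the intuition for why augmentations can replace hand-tuned image priors such as total variation.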

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-ghiasi22a,
  title     = {Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations},
  author    = {Ghiasi, Amin and Kazemi, Hamid and Reich, Steven and Zhu, Chen and Goldblum, Micah and Goldstein, Tom},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {7484--7512},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/ghiasi22a/ghiasi22a.pdf},
  url       = {https://proceedings.mlr.press/v162/ghiasi22a.html},
  abstract  = {Existing techniques for model inversion typically rely on hard-to-tune regularizers, such as total variation or feature regularization, which must be individually calibrated for each network in order to produce adequate images. In this work, we introduce Plug-In Inversion, which relies on a simple set of augmentations and does not require excessive hyper-parameter tuning. Under our proposed augmentation-based scheme, the same set of augmentation hyper-parameters can be used for inverting a wide range of image classification models, regardless of input dimensions or the architecture. We illustrate the practicality of our approach by inverting Vision Transformers (ViTs) and Multi-Layer Perceptrons (MLPs) trained on the ImageNet dataset, tasks which to the best of our knowledge have not been successfully accomplished by any previous works.}
}
Endnote
%0 Conference Paper
%T Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations
%A Amin Ghiasi
%A Hamid Kazemi
%A Steven Reich
%A Chen Zhu
%A Micah Goldblum
%A Tom Goldstein
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-ghiasi22a
%I PMLR
%P 7484--7512
%U https://proceedings.mlr.press/v162/ghiasi22a.html
%V 162
%X Existing techniques for model inversion typically rely on hard-to-tune regularizers, such as total variation or feature regularization, which must be individually calibrated for each network in order to produce adequate images. In this work, we introduce Plug-In Inversion, which relies on a simple set of augmentations and does not require excessive hyper-parameter tuning. Under our proposed augmentation-based scheme, the same set of augmentation hyper-parameters can be used for inverting a wide range of image classification models, regardless of input dimensions or the architecture. We illustrate the practicality of our approach by inverting Vision Transformers (ViTs) and Multi-Layer Perceptrons (MLPs) trained on the ImageNet dataset, tasks which to the best of our knowledge have not been successfully accomplished by any previous works.
APA
Ghiasi, A., Kazemi, H., Reich, S., Zhu, C., Goldblum, M. & Goldstein, T. (2022). Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:7484-7512. Available from https://proceedings.mlr.press/v162/ghiasi22a.html.