Dataset Condensation via Efficient Synthetic-Data Parameterization

Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, Hyun Oh Song
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:11102-11118, 2022.

Abstract

The great success of machine learning with massive amounts of data comes at the price of huge computation and storage costs for training and tuning. Recent studies on dataset condensation attempt to reduce the dependence on such massive data by synthesizing a compact training dataset. However, the existing approaches have fundamental limitations in optimization due to the limited representability of synthetic datasets that do not account for any data regularity characteristics. To this end, we propose a novel condensation framework that generates multiple synthetic examples within a limited storage budget via efficient parameterization that accounts for data regularity. We further analyze the shortcomings of existing gradient matching-based condensation methods and develop an effective optimization technique for improving the condensation of training-data information. We propose a unified algorithm that drastically improves the quality of condensed data over the current state-of-the-art on CIFAR-10, ImageNet, and Speech Commands.
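
To make the abstract's two ingredients concrete, here is a minimal PyTorch sketch: a patch-based multi-formation function that decodes several training images from each stored tensor (so a fixed storage budget yields more synthetic examples), plus a basic gradient-matching loss. The patch layout, interpolation mode, and all names below are illustrative assumptions for this page, not the authors' released implementation; see the linked PDF for the actual method.

import torch
import torch.nn.functional as F

def multi_formation(stored: torch.Tensor, factor: int = 2) -> torch.Tensor:
    """Decode factor**2 training images from each stored image.

    stored: (N, C, H, W) condensed tensor kept under the storage budget.
    Each non-overlapping (H//factor, W//factor) patch is treated as one
    downsampled synthetic image and is upsampled back to (H, W).
    """
    n, c, h, w = stored.shape
    ph, pw = h // factor, w // factor
    # Split each stored image into factor*factor patches, one per synthetic image.
    patches = stored.reshape(n, c, factor, ph, factor, pw)
    patches = patches.permute(0, 2, 4, 1, 3, 5).reshape(-1, c, ph, pw)
    # Upsample every patch to the full training resolution.
    return F.interpolate(patches, size=(h, w), mode="bilinear",
                         align_corners=False)

def gradient_matching_loss(net, real_x, real_y, syn_x, syn_y):
    """Match network-loss gradients on a real batch and a synthetic batch."""
    crit = torch.nn.CrossEntropyLoss()
    g_real = torch.autograd.grad(crit(net(real_x), real_y), net.parameters())
    g_syn = torch.autograd.grad(crit(net(syn_x), syn_y), net.parameters(),
                                create_graph=True)
    return sum(F.mse_loss(gs, gr.detach()) for gs, gr in zip(g_syn, g_real))

# Ten stored 32x32 images decode into forty synthetic training images.
stored = torch.randn(10, 3, 32, 32, requires_grad=True)
syn_x = multi_formation(stored)
syn_y = torch.arange(10).repeat_interleave(4)  # illustrative labels

net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
real_x, real_y = torch.randn(40, 3, 32, 32), torch.randint(0, 10, (40,))
loss = gradient_matching_loss(net, real_x, real_y, syn_x, syn_y)
loss.backward()  # gradients reach `stored` through the formation function

Because the formation function is differentiable, the matching loss backpropagates through the decoded images into the compact stored tensor, which is what makes this parameterization trainable end to end.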

Cite this Paper

BibTeX
@InProceedings{pmlr-v162-kim22c,
  title     = {Dataset Condensation via Efficient Synthetic-Data Parameterization},
  author    = {Kim, Jang-Hyun and Kim, Jinuk and Oh, Seong Joon and Yun, Sangdoo and Song, Hwanjun and Jeong, Joonhyun and Ha, Jung-Woo and Song, Hyun Oh},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {11102--11118},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/kim22c/kim22c.pdf},
  url       = {https://proceedings.mlr.press/v162/kim22c.html}
}
Endnote
%0 Conference Paper
%T Dataset Condensation via Efficient Synthetic-Data Parameterization
%A Jang-Hyun Kim
%A Jinuk Kim
%A Seong Joon Oh
%A Sangdoo Yun
%A Hwanjun Song
%A Joonhyun Jeong
%A Jung-Woo Ha
%A Hyun Oh Song
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-kim22c
%I PMLR
%P 11102--11118
%U https://proceedings.mlr.press/v162/kim22c.html
%V 162
APA
Kim, J., Kim, J., Oh, S.J., Yun, S., Song, H., Jeong, J., Ha, J. & Song, H.O. (2022). Dataset Condensation via Efficient Synthetic-Data Parameterization. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:11102-11118. Available from https://proceedings.mlr.press/v162/kim22c.html.
