PoF: Post-Training of Feature Extractor for Improving Generalization

Ikuro Sato, Ryota Yamada, Masayuki Tanaka, Nakamasa Inoue, Rei Kawakami
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:19221-19230, 2022.

Abstract

The local shape of the loss landscape near a minimum, especially its flatness, has been intensively investigated and is known to play an important role in the generalization of deep models. We developed a training algorithm called PoF: Post-Training of Feature Extractor, which updates the feature-extractor part of an already-trained deep model to search for a flatter minimum. Its characteristics are two-fold: 1) the feature extractor is trained under parameter perturbations in the higher-layer parameter space, motivated by observations suggesting that flattening the loss landscape with respect to the higher-layer parameters is beneficial, and 2) the perturbation range is determined in a data-driven manner so as to reduce the part of the test loss caused by positive loss curvature. We provide a theoretical analysis showing that the proposed algorithm implicitly reduces the targeted Hessian components as well as the loss itself. Experimental results show that PoF improved performance over baseline methods on the CIFAR-10 and CIFAR-100 datasets with only 10 epochs of post-training, and on the SVHN dataset with 50 epochs of post-training.
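To make the procedure concrete, the following is a minimal PyTorch-style sketch of one post-training step. It assumes the network is split into feature_extractor and head modules, that only the head (higher-layer) parameters are perturbed, and that a fixed isotropic perturbation radius rho stands in for the paper's data-driven perturbation range; all names and the perturbation scheme are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def pof_step(feature_extractor, head, batch, optimizer, rho=0.05):
    # Hypothetical post-training step: perturb the head parameters, then
    # update only the feature extractor under that perturbation. A fixed
    # radius `rho` is used here for simplicity; the paper instead sets the
    # perturbation range in a data-driven manner.
    x, y = batch

    # Apply a random perturbation of norm `rho` to the head parameters only.
    perturbations = []
    with torch.no_grad():
        for p in head.parameters():
            eps = torch.randn_like(p)
            eps = rho * eps / (eps.norm() + 1e-12)
            p.add_(eps)
            perturbations.append((p, eps))

    # Loss under the perturbed head; gradients flow back into the feature
    # extractor, which is the only part the optimizer updates (head gradients
    # are computed but never applied).
    loss = F.cross_entropy(head(feature_extractor(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Restore the head to its trained values.
    with torch.no_grad():
        for p, eps in perturbations:
            p.sub_(eps)
    return loss.item()

Illustrative usage under the same assumptions: the optimizer holds only the feature extractor's parameters, e.g. optimizer = torch.optim.SGD(feature_extractor.parameters(), lr=1e-3, momentum=0.9), and pof_step is called once per mini-batch during the short post-training phase.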

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-sato22a,
  title     = {{P}o{F}: Post-Training of Feature Extractor for Improving Generalization},
  author    = {Sato, Ikuro and Yamada, Ryota and Tanaka, Masayuki and Inoue, Nakamasa and Kawakami, Rei},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {19221--19230},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/sato22a/sato22a.pdf},
  url       = {https://proceedings.mlr.press/v162/sato22a.html},
  abstract  = {It has been intensively investigated that the local shape, especially flatness, of the loss landscape near a minimum plays an important role for generalization of deep models. We developed a training algorithm called PoF: Post-Training of Feature Extractor that updates the feature extractor part of an already-trained deep model to search a flatter minimum. The characteristics are two-fold: 1) Feature extractor is trained under parameter perturbations in the higher-layer parameter space, based on observations that suggest flattening higher-layer parameter space, and 2) the perturbation range is determined in a data-driven manner aiming to reduce a part of test loss caused by the positive loss curvature. We provide a theoretical analysis that shows the proposed algorithm implicitly reduces the target Hessian components as well as the loss. Experimental results show that PoF improved model performance against baseline methods on both CIFAR-10 and CIFAR-100 datasets for only 10-epoch post-training, and on SVHN dataset for 50-epoch post-training.}
}
Endnote
%0 Conference Paper
%T PoF: Post-Training of Feature Extractor for Improving Generalization
%A Ikuro Sato
%A Ryota Yamada
%A Masayuki Tanaka
%A Nakamasa Inoue
%A Rei Kawakami
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-sato22a
%I PMLR
%P 19221--19230
%U https://proceedings.mlr.press/v162/sato22a.html
%V 162
%X It has been intensively investigated that the local shape, especially flatness, of the loss landscape near a minimum plays an important role for generalization of deep models. We developed a training algorithm called PoF: Post-Training of Feature Extractor that updates the feature extractor part of an already-trained deep model to search a flatter minimum. The characteristics are two-fold: 1) Feature extractor is trained under parameter perturbations in the higher-layer parameter space, based on observations that suggest flattening higher-layer parameter space, and 2) the perturbation range is determined in a data-driven manner aiming to reduce a part of test loss caused by the positive loss curvature. We provide a theoretical analysis that shows the proposed algorithm implicitly reduces the target Hessian components as well as the loss. Experimental results show that PoF improved model performance against baseline methods on both CIFAR-10 and CIFAR-100 datasets for only 10-epoch post-training, and on SVHN dataset for 50-epoch post-training.
APA
Sato, I., Yamada, R., Tanaka, M., Inoue, N. & Kawakami, R. (2022). PoF: Post-Training of Feature Extractor for Improving Generalization. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:19221-19230. Available from https://proceedings.mlr.press/v162/sato22a.html.
