Topologically Densified Distributions

Christoph Hofer, Florian Graf, Marc Niethammer, Roland Kwitt
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4304-4313, 2020.

Abstract

We study regularization in the context of small sample-size learning with over-parametrized neural networks. Specifically, we shift focus from architectural properties, such as norms on the network weights, to properties of the internal representations before a linear classifier. In particular, we impose a topological constraint on samples drawn from the probability measure induced in that space. This provably leads to mass concentration effects around the representations of training instances, a property beneficial for generalization. By leveraging previous work on imposing topological constraints in a neural network setting, we provide empirical evidence (across various vision benchmarks) to support our claim of better generalization.
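
For intuition, the sketch below shows one way such a topological constraint can be phrased as a differentiable regularizer on a per-class batch of latent representations: the 0-dimensional persistence (death) times of a Vietoris-Rips filtration coincide with the edge lengths of a minimum spanning tree over the pairwise distances, and penalizing their deviation from a target scale beta pushes the batch to concentrate at that scale. This is a minimal sketch under those assumptions; the function name, the squared penalty, and the hyperparameters beta and lam are illustrative choices, not the authors' exact formulation.

    # Minimal sketch (not the authors' code) of a topological density regularizer.
    import numpy as np
    import torch
    from scipy.sparse.csgraph import minimum_spanning_tree

    def topological_density_loss(z: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
        """z: (n, d) latent representations of a single-class mini-batch."""
        # Differentiable pairwise Euclidean distances.
        dist = torch.cdist(z, z)                                  # (n, n)

        # The MST edge *indices* are found on a detached copy (combinatorial step);
        # the edge *lengths* (= 0-dim persistence death times) are then gathered
        # from the differentiable distance matrix.
        mst = minimum_spanning_tree(dist.detach().cpu().numpy()).tocoo()
        rows = torch.as_tensor(mst.row, dtype=torch.long, device=z.device)
        cols = torch.as_tensor(mst.col, dtype=torch.long, device=z.device)
        death_times = dist[rows, cols]                            # (n - 1,)

        # Penalize deviation from the target connectivity scale beta
        # (an illustrative squared penalty; other penalties are possible).
        return ((death_times - beta) ** 2).mean()

    # Usage sketch (hypothetical model returning latents z alongside logits):
    #   logits, z = model(x)
    #   loss = criterion(logits, y) + lam * topological_density_loss(z[y == c], beta)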

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-hofer20a,
  title     = {Topologically Densified Distributions},
  author    = {Hofer, Christoph and Graf, Florian and Niethammer, Marc and Kwitt, Roland},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {4304--4313},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/hofer20a/hofer20a.pdf},
  url       = {https://proceedings.mlr.press/v119/hofer20a.html},
  abstract  = {We study regularization in the context of small sample-size learning with over-parametrized neural networks. Specifically, we shift focus from architectural properties, such as norms on the network weights, to properties of the internal representations before a linear classifier. In particular, we impose a topological constraint on samples drawn from the probability measure induced in that space. This provably leads to mass concentration effects around the representations of training instances, a property beneficial for generalization. By leveraging previous work on imposing topological constraints in a neural network setting, we provide empirical evidence (across various vision benchmarks) to support our claim of better generalization.}
}
Endnote
%0 Conference Paper
%T Topologically Densified Distributions
%A Christoph Hofer
%A Florian Graf
%A Marc Niethammer
%A Roland Kwitt
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-hofer20a
%I PMLR
%P 4304--4313
%U https://proceedings.mlr.press/v119/hofer20a.html
%V 119
%X We study regularization in the context of small sample-size learning with over-parametrized neural networks. Specifically, we shift focus from architectural properties, such as norms on the network weights, to properties of the internal representations before a linear classifier. In particular, we impose a topological constraint on samples drawn from the probability measure induced in that space. This provably leads to mass concentration effects around the representations of training instances, a property beneficial for generalization. By leveraging previous work on imposing topological constraints in a neural network setting, we provide empirical evidence (across various vision benchmarks) to support our claim of better generalization.
APA
Hofer, C., Graf, F., Niethammer, M. & Kwitt, R. (2020). Topologically Densified Distributions. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:4304-4313. Available from https://proceedings.mlr.press/v119/hofer20a.html.
