How Good is a Single Basin?

Kai Lion, Lorenzo Noci, Thomas Hofmann, Gregor Bachmann
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4015-4023, 2024.

Abstract

The multi-modal nature of neural loss landscapes is often considered to be the main driver behind the empirical success of deep ensembles. In this work, we probe this belief by constructing various "connected" ensembles which are restricted to lie in the same basin. Through our experiments, we demonstrate that increased connectivity indeed negatively impacts performance. However, when incorporating the knowledge from other basins implicitly through distillation, we show that the gap in performance can be mitigated by re-discovering (multi-basin) deep ensembles within a single basin. Thus, we conjecture that while the extra-basin knowledge is at least partially present in any given basin, it cannot be easily harnessed without learning it from other basins.
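To make the construction concrete, here is a minimal sketch in PyTorch (not the authors' code; the toy task, perturbation scale, and fine-tuning budget are assumptions made for illustration). It contrasts a standard deep ensemble, whose members are trained from independent initializations and typically land in different basins, with a "connected" ensemble whose members are small perturbations of a single trained anchor, so that all of them plausibly share that anchor's basin:

# Toy sketch: deep ensemble (independent runs, typically different basins)
# vs. a "connected" ensemble (perturbations of one anchor, briefly
# fine-tuned so they plausibly stay in the anchor's basin). The data,
# architecture, perturbation scale, and budgets are illustrative assumptions.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X.sum(dim=1) > 0).long()  # separable toy binary task

def make_model():
    return nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))

def train(model, steps=200, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model

# Deep ensemble: independent initializations explore separate basins.
deep_ensemble = [train(make_model()) for _ in range(3)]

# Connected ensemble: every member is derived from one trained anchor,
# restricting the ensemble to (a neighborhood of) a single basin.
anchor = train(make_model())
connected_ensemble = []
for _ in range(3):
    member = copy.deepcopy(anchor)
    with torch.no_grad():
        for p in member.parameters():
            p.add_(0.01 * torch.randn_like(p))  # small jitter around anchor
    connected_ensemble.append(train(member, steps=50))

def ensemble_acc(models):
    # Average the members' predictive distributions, then classify.
    with torch.no_grad():
        probs = torch.stack([m(X).softmax(dim=-1) for m in models]).mean(0)
    return (probs.argmax(dim=-1) == y).float().mean().item()

print(f"deep ensemble:      {ensemble_acc(deep_ensemble):.3f}")
print(f"connected ensemble: {ensemble_acc(connected_ensemble):.3f}")

On the paper's account, the connected ensemble would be expected to trail the deep ensemble, since its members share a basin and therefore much of the same function; the distillation experiments then probe whether that gap reflects knowledge genuinely absent from the basin or merely knowledge that is hard to reach without learning it from other basins.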

Cite this Paper

BibTeX
@InProceedings{pmlr-v238-lion24a,
  title     = {How Good is a Single Basin?},
  author    = {Lion, Kai and Noci, Lorenzo and Hofmann, Thomas and Bachmann, Gregor},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4015--4023},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/lion24a/lion24a.pdf},
  url       = {https://proceedings.mlr.press/v238/lion24a.html}
}
Endnote
%0 Conference Paper
%T How Good is a Single Basin?
%A Kai Lion
%A Lorenzo Noci
%A Thomas Hofmann
%A Gregor Bachmann
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-lion24a
%I PMLR
%P 4015--4023
%U https://proceedings.mlr.press/v238/lion24a.html
%V 238
APA
Lion, K., Noci, L., Hofmann, T. & Bachmann, G. (2024). How Good is a Single Basin? Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4015-4023. Available from https://proceedings.mlr.press/v238/lion24a.html.