Multi-Symmetry Ensembles: Improving Diversity and Generalization via Opposing Symmetries

Charlotte Loh, Seungwook Han, Shivchander Sudalairaj, Rumen Dangovski, Kai Xu, Florian Wenzel, Marin Soljacic, Akash Srivastava
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:22614-22630, 2023.

Abstract

Deep ensembles (DE) have been successful in improving model performance by learning diverse members via the stochasticity of random initialization. While recent works have attempted to promote further diversity in DE via hyperparameters or regularizing loss functions, these methods primarily still rely on a stochastic approach to explore the hypothesis space. In this work, we present Multi-Symmetry Ensembles (MSE), a framework for constructing diverse ensembles by capturing the multiplicity of hypotheses along symmetry axes, which explore the hypothesis space beyond stochastic perturbations of model weights and hyperparameters. We leverage recent advances in contrastive representation learning to create models that separately capture opposing hypotheses of invariant and equivariant functional classes and present a simple ensembling approach to efficiently combine appropriate hypotheses for a given task. We show that MSE effectively captures the multiplicity of conflicting hypotheses that is often required in large, diverse datasets like ImageNet. As a result of their inherent diversity, MSE improves classification performance, uncertainty quantification, and generalization across a series of transfer tasks. Our code is available at https://github.com/clott3/multi-sym-ensem
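
For intuition, below is a minimal PyTorch sketch of the inference-time ensembling idea the abstract describes: one member pretrained toward invariance to a transformation and one toward equivariance, each fine-tuned as a classifier and then combined. This is not the authors' implementation (see the repository above for that); the checkpoint names, the ResNet-50 backbone, and the mean-of-softmax combination rule are illustrative assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models

def load_member(checkpoint_path, num_classes=1000):
    # One ensemble member: a classifier fine-tuned from a contrastively
    # pretrained backbone. ResNet-50 and the checkpoint path are placeholders.
    model = models.resnet50(num_classes=num_classes)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    return model.eval()

@torch.no_grad()
def ensemble_predict(members, images):
    # Average the members' softmax outputs. The abstract only promises
    # "a simple ensembling approach"; mean-of-probabilities is assumed here.
    probs = [F.softmax(member(images), dim=-1) for member in members]
    return torch.stack(probs).mean(dim=0)

# Hypothetical usage: one rotation-invariant and one rotation-equivariant member.
# members = [load_member("invariant_rot.pt"), load_member("equivariant_rot.pt")]
# preds = ensemble_predict(members, batch_of_images)  # shape: (batch, num_classes)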

Cite this Paper

BibTeX
@InProceedings{pmlr-v202-loh23a,
  title     = {Multi-Symmetry Ensembles: Improving Diversity and Generalization via Opposing Symmetries},
  author    = {Loh, Charlotte and Han, Seungwook and Sudalairaj, Shivchander and Dangovski, Rumen and Xu, Kai and Wenzel, Florian and Soljacic, Marin and Srivastava, Akash},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {22614--22630},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/loh23a/loh23a.pdf},
  url       = {https://proceedings.mlr.press/v202/loh23a.html},
  abstract  = {Deep ensembles (DE) have been successful in improving model performance by learning diverse members via the stochasticity of random initialization. While recent works have attempted to promote further diversity in DE via hyperparameters or regularizing loss functions, these methods primarily still rely on a stochastic approach to explore the hypothesis space. In this work, we present Multi-Symmetry Ensembles (MSE), a framework for constructing diverse ensembles by capturing the multiplicity of hypotheses along symmetry axes, which explore the hypothesis space beyond stochastic perturbations of model weights and hyperparameters. We leverage recent advances in contrastive representation learning to create models that separately capture opposing hypotheses of invariant and equivariant functional classes and present a simple ensembling approach to efficiently combine appropriate hypotheses for a given task. We show that MSE effectively captures the multiplicity of conflicting hypotheses that is often required in large, diverse datasets like ImageNet. As a result of their inherent diversity, MSE improves classification performance, uncertainty quantification, and generalization across a series of transfer tasks. Our code is available at https://github.com/clott3/multi-sym-ensem}
}
Endnote
%0 Conference Paper
%T Multi-Symmetry Ensembles: Improving Diversity and Generalization via Opposing Symmetries
%A Charlotte Loh
%A Seungwook Han
%A Shivchander Sudalairaj
%A Rumen Dangovski
%A Kai Xu
%A Florian Wenzel
%A Marin Soljacic
%A Akash Srivastava
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-loh23a
%I PMLR
%P 22614--22630
%U https://proceedings.mlr.press/v202/loh23a.html
%V 202
%X Deep ensembles (DE) have been successful in improving model performance by learning diverse members via the stochasticity of random initialization. While recent works have attempted to promote further diversity in DE via hyperparameters or regularizing loss functions, these methods primarily still rely on a stochastic approach to explore the hypothesis space. In this work, we present Multi-Symmetry Ensembles (MSE), a framework for constructing diverse ensembles by capturing the multiplicity of hypotheses along symmetry axes, which explore the hypothesis space beyond stochastic perturbations of model weights and hyperparameters. We leverage recent advances in contrastive representation learning to create models that separately capture opposing hypotheses of invariant and equivariant functional classes and present a simple ensembling approach to efficiently combine appropriate hypotheses for a given task. We show that MSE effectively captures the multiplicity of conflicting hypotheses that is often required in large, diverse datasets like ImageNet. As a result of their inherent diversity, MSE improves classification performance, uncertainty quantification, and generalization across a series of transfer tasks. Our code is available at https://github.com/clott3/multi-sym-ensem
APA
Loh, C., Han, S., Sudalairaj, S., Dangovski, R., Xu, K., Wenzel, F., Soljacic, M., & Srivastava, A. (2023). Multi-Symmetry Ensembles: Improving Diversity and Generalization via Opposing Symmetries. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:22614-22630. Available from https://proceedings.mlr.press/v202/loh23a.html.
