Combining Diverse Feature Priors

Saachi Jain, Dimitris Tsipras, Aleksander Madry
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:9802-9832, 2022.

Abstract

To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of explicit feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other’s mistakes, which, in turn, leads to better generalization and resilience to spurious correlations.
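To make the approach sketched in the abstract concrete, below is a minimal co-training-style illustration in Python with scikit-learn on synthetic data. It is a rough sketch only: the two feature "views" (disjoint halves of the input features), the confidence threshold, and the probability averaging at the end are illustrative assumptions, not the paper's actual feature priors, model architectures, or training procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 20-dimensional inputs, binary labels driven by two features.
X = rng.normal(size=(600, 20))
y = (X[:, 0] + X[:, 10] + 0.5 * rng.normal(size=600) > 0).astype(int)
X_lab, y_lab = X[:100], y[:100]      # small labeled set
X_unlab = X[100:500]                 # unlabeled pool
X_test, y_test = X[500:], y[500:]    # held-out evaluation set

# Two "views" standing in for different feature priors: each model is
# restricted to a disjoint half of the features (an illustrative assumption;
# the paper studies priors on images rather than feature subsets).
def view_a(Z):
    return Z[:, :10]

def view_b(Z):
    return Z[:, 10:]

Xa, ya = view_a(X_lab), y_lab.copy()
Xb, yb = view_b(X_lab), y_lab.copy()
model_a = LogisticRegression().fit(Xa, ya)
model_b = LogisticRegression().fit(Xb, yb)

# Co-training-style loop on unlabeled data: each model pseudo-labels the
# points it is confident about, and those points are added to the *other*
# model's training set so the two priors can correct each other's mistakes.
for _ in range(3):
    prob_a = model_a.predict_proba(view_a(X_unlab))
    prob_b = model_b.predict_proba(view_b(X_unlab))
    conf_a = prob_a.max(axis=1) > 0.9
    conf_b = prob_b.max(axis=1) > 0.9

    Xb = np.vstack([Xb, view_b(X_unlab)[conf_a]])
    yb = np.concatenate([yb, prob_a.argmax(axis=1)[conf_a]])
    Xa = np.vstack([Xa, view_a(X_unlab)[conf_b]])
    ya = np.concatenate([ya, prob_b.argmax(axis=1)[conf_b]])

    model_a = LogisticRegression().fit(Xa, ya)
    model_b = LogisticRegression().fit(Xb, yb)

# Combine the two priors at prediction time by averaging their probabilities.
probs = (model_a.predict_proba(view_a(X_test))
         + model_b.predict_proba(view_b(X_test))) / 2
acc = (probs.argmax(axis=1) == y_test).mean()
print(f"combined test accuracy: {acc:.2f}")

The key design point this sketch captures is that the two models are trained under different restrictions on the input, so their confident pseudo-labels tend to fail in different places; any concrete choice of priors, models, or thresholds should be taken from the paper itself.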

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-jain22b,
  title     = {Combining Diverse Feature Priors},
  author    = {Jain, Saachi and Tsipras, Dimitris and Madry, Aleksander},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {9802--9832},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/jain22b/jain22b.pdf},
  url       = {https://proceedings.mlr.press/v162/jain22b.html},
  abstract  = {To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of explicit feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other’s mistakes, which, in turn, leads to better generalization and resilience to spurious correlations.}
}
Endnote
%0 Conference Paper
%T Combining Diverse Feature Priors
%A Saachi Jain
%A Dimitris Tsipras
%A Aleksander Madry
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-jain22b
%I PMLR
%P 9802--9832
%U https://proceedings.mlr.press/v162/jain22b.html
%V 162
%X To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of explicit feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other’s mistakes, which, in turn, leads to better generalization and resilience to spurious correlations.
APA
Jain, S., Tsipras, D., & Madry, A. (2022). Combining Diverse Feature Priors. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:9802-9832. Available from https://proceedings.mlr.press/v162/jain22b.html.