Methods and Analysis of The First Competition in Predicting Generalization of Deep Learning

Yiding Jiang, Parth Natekar, Manik Sharma, Sumukh K. Aithal, Dhruva Kashyap, Natarajan Subramanyam, Carlos Lassance, Daniel M. Roy, Gintare Karolina Dziugaite, Suriya Gunasekar, Isabelle Guyon, Pierre Foret, Scott Yak, Hossein Mobahi, Behnam Neyshabur, Samy Bengio
Proceedings of the NeurIPS 2020 Competition and Demonstration Track, PMLR 133:170-190, 2021.

Abstract

Deep learning has recently been applied successfully to an ever larger number of problems, ranging from pattern recognition to complex decision making. However, several concerns have been raised, including about guarantees of good generalization, which is of foremost importance. Despite numerous attempts, conventional statistical learning approaches fall short of providing a satisfactory explanation of why deep learning works. In a competition hosted at the Thirty-Fourth Conference on Neural Information Processing Systems (NeurIPS 2020), we invited the community to design robust and general complexity measures that can accurately predict the generalization of models. In this paper, we describe the competition design, the protocols, and the solutions of the top three teams in detail. In addition, we discuss the outcomes, common failure modes, and potential future directions for the competition.

Cite this Paper


BibTeX
@InProceedings{pmlr-v133-jiang21a,
  title     = {Methods and Analysis of The First Competition in Predicting Generalization of Deep Learning},
  author    = {Jiang, Yiding and Natekar, Parth and Sharma, Manik and Aithal, Sumukh K. and Kashyap, Dhruva and Subramanyam, Natarajan and Lassance, Carlos and Roy, Daniel M. and Dziugaite, Gintare Karolina and Gunasekar, Suriya and Guyon, Isabelle and Foret, Pierre and Yak, Scott and Mobahi, Hossein and Neyshabur, Behnam and Bengio, Samy},
  booktitle = {Proceedings of the NeurIPS 2020 Competition and Demonstration Track},
  pages     = {170--190},
  year      = {2021},
  editor    = {Escalante, Hugo Jair and Hofmann, Katja},
  volume    = {133},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--12 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v133/jiang21a/jiang21a.pdf},
  url       = {https://proceedings.mlr.press/v133/jiang21a.html},
  abstract  = {Deep learning has recently been applied successfully to an ever larger number of problems, ranging from pattern recognition to complex decision making. However, several concerns have been raised, including about guarantees of good generalization, which is of foremost importance. Despite numerous attempts, conventional statistical learning approaches fall short of providing a satisfactory explanation of why deep learning works. In a competition hosted at the Thirty-Fourth Conference on Neural Information Processing Systems (NeurIPS 2020), we invited the community to design robust and general complexity measures that can accurately predict the generalization of models. In this paper, we describe the competition design, the protocols, and the solutions of the top three teams in detail. In addition, we discuss the outcomes, common failure modes, and potential future directions for the competition.}
}
APA
Jiang, Y., Natekar, P., Sharma, M., Aithal, S. K., Kashyap, D., Subramanyam, N., Lassance, C., Roy, D. M., Dziugaite, G. K., Gunasekar, S., Guyon, I., Foret, P., Yak, S., Mobahi, H., Neyshabur, B., & Bengio, S. (2021). Methods and Analysis of The First Competition in Predicting Generalization of Deep Learning. Proceedings of the NeurIPS 2020 Competition and Demonstration Track, in Proceedings of Machine Learning Research 133:170-190. Available from https://proceedings.mlr.press/v133/jiang21a.html.