NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned

Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, Wen-tau Yih
Proceedings of the NeurIPS 2020 Competition and Demonstration Track, PMLR 133:86-111, 2021.

Abstract

We review the EfficientQA competition from NeurIPS 2020. The competition focused on open-domain question answering (QA), where systems take natural language questions as input and return natural language answers. The aim of the competition was to build systems that can predict correct answers while also satisfying strict on-disk memory budgets. These memory budgets were designed to encourage contestants to explore the trade-off between storing retrieval corpora and storing the parameters of learned models. In this report, we describe the motivation and organization of the competition, review the best submissions, and analyze system predictions to inform a discussion of evaluation for open-domain QA.
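The task setup described in the abstract can be summarized as: a system maps a question string to an answer string, and everything the system stores on disk (retrieval corpus, index, and model parameters) counts against its memory budget. The sketch below is not from the competition or any submitted system; it uses a toy word-overlap retriever, a stub reader, and a stdlib-only directory-size check purely to illustrate the interface and how an on-disk budget might be measured.

```python
import os
from typing import List

# Toy corpus standing in for a retrieval collection (e.g., Wikipedia passages).
CORPUS: List[str] = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the highest mountain above sea level.",
]

def retrieve(question: str, k: int = 1) -> List[str]:
    """Rank passages by simple word overlap with the question (toy retriever)."""
    q_tokens = set(question.lower().split())
    scored = sorted(CORPUS, key=lambda p: -len(q_tokens & set(p.lower().split())))
    return scored[:k]

def read(question: str, passages: List[str]) -> str:
    """Stub reader: return the top passage as the 'answer'.
    A real system would extract or generate a short answer span."""
    return passages[0] if passages else ""

def answer(question: str) -> str:
    """The interface an open-domain QA system exposes: question -> answer string."""
    return read(question, retrieve(question))

def disk_footprint_bytes(system_dir: str) -> int:
    """Total on-disk size of a submission directory (corpus + index + model),
    roughly how a size budget could be checked."""
    total = 0
    for root, _, files in os.walk(system_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

if __name__ == "__main__":
    print(answer("Where is the Eiffel Tower?"))
```

Under this framing, shrinking the corpus and index reduces `disk_footprint_bytes` but limits what `retrieve` can find, while a purely parametric (closed-book) model spends the same budget on weights instead; that trade-off is what the competition's budget tracks were designed to probe.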

Cite this Paper


BibTeX
@InProceedings{pmlr-v133-min21a,
  title     = {NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned},
  author    = {Min, Sewon and Boyd-Graber, Jordan and Alberti, Chris and Chen, Danqi and Choi, Eunsol and Collins, Michael and Guu, Kelvin and Hajishirzi, Hannaneh and Lee, Kenton and Palomaki, Jennimaria and Raffel, Colin and Roberts, Adam and Kwiatkowski, Tom and Lewis, Patrick and Wu, Yuxiang and K\"uttler, Heinrich and Liu, Linqing and Minervini, Pasquale and Stenetorp, Pontus and Riedel, Sebastian and Yang, Sohee and Seo, Minjoon and Izacard, Gautier and Petroni, Fabio and Hosseini, Lucas and Cao, Nicola De and Grave, Edouard and Yamada, Ikuya and Shimaoka, Sonse and Suzuki, Masatoshi and Miyawaki, Shumpei and Sato, Shun and Takahashi, Ryo and Suzuki, Jun and Fajcik, Martin and Docekal, Martin and Ondrej, Karel and Smrz, Pavel and Cheng, Hao and Shen, Yelong and Liu, Xiaodong and He, Pengcheng and Chen, Weizhu and Gao, Jianfeng and Oguz, Barlas and Chen, Xilun and Karpukhin, Vladimir and Peshterliev, Stan and Okhonko, Dmytro and Schlichtkrull, Michael and Gupta, Sonal and Mehdad, Yashar and Yih, Wen-tau},
  booktitle = {Proceedings of the NeurIPS 2020 Competition and Demonstration Track},
  pages     = {86--111},
  year      = {2021},
  editor    = {Escalante, Hugo Jair and Hofmann, Katja},
  volume    = {133},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--12 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v133/min21a/min21a.pdf},
  url       = {https://proceedings.mlr.press/v133/min21a.html}
}