Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI

Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schroeder De Witt, Fabio Pizzati, Katherine Elkins, Supratik Mukhopadhyay, Adel Bibi, Botos Csaba, Fabro Steibel, Fazl Barez, Genevieve Smith, Gianluca Guadagni, Jon Chun, Jordi Cabot, Joseph Marvin Imperial, Juan A. Nolazco-Flores, Lori Landay, Matthew Thomas Jackson, Paul Rottger, Philip Torr, Trevor Darrell, Yong Suk Lee, Jakob Nicolaus Foerster
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:12348-12370, 2024.

Abstract

In the next few years, applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education. The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation, in particular from some of the major tech companies who are leading in AI development. While regulation is important, it is key that it does not put at risk the budding field of open-source Generative AI. We argue for the responsible open sourcing of generative AI models in the near and medium term. To set the stage, we first introduce an AI openness taxonomy system and apply it to 40 current large language models. We then outline differential benefits and risks of open versus closed source AI and present potential risk mitigation, ranging from best practices to calls for technical and scientific contributions. We hope that this report will add a much needed missing voice to the current public discourse on near to mid-term AI safety and other societal impact.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-eiras24b,
  title     = {Position: Near to Mid-term Risks and Opportunities of Open-Source Generative {AI}},
  author    = {Eiras, Francisco and Petrov, Aleksandar and Vidgen, Bertie and Schroeder De Witt, Christian and Pizzati, Fabio and Elkins, Katherine and Mukhopadhyay, Supratik and Bibi, Adel and Csaba, Botos and Steibel, Fabro and Barez, Fazl and Smith, Genevieve and Guadagni, Gianluca and Chun, Jon and Cabot, Jordi and Imperial, Joseph Marvin and Nolazco-Flores, Juan A. and Landay, Lori and Jackson, Matthew Thomas and Rottger, Paul and Torr, Philip and Darrell, Trevor and Lee, Yong Suk and Foerster, Jakob Nicolaus},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {12348--12370},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/eiras24b/eiras24b.pdf},
  url       = {https://proceedings.mlr.press/v235/eiras24b.html},
  abstract  = {In the next few years, applications of Generative AI are expected to revolutionize a number of different areas, ranging from science \& medicine to education. The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation, in particular from some of the major tech companies who are leading in AI development. While regulation is important, it is key that it does not put at risk the budding field of open-source Generative AI. We argue for the responsible open sourcing of generative AI models in the near and medium term. To set the stage, we first introduce an AI openness taxonomy system and apply it to 40 current large language models. We then outline differential benefits and risks of open versus closed source AI and present potential risk mitigation, ranging from best practices to calls for technical and scientific contributions. We hope that this report will add a much needed missing voice to the current public discourse on near to mid-term AI safety and other societal impact.}
}
Endnote
%0 Conference Paper
%T Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI
%A Francisco Eiras
%A Aleksandar Petrov
%A Bertie Vidgen
%A Christian Schroeder De Witt
%A Fabio Pizzati
%A Katherine Elkins
%A Supratik Mukhopadhyay
%A Adel Bibi
%A Botos Csaba
%A Fabro Steibel
%A Fazl Barez
%A Genevieve Smith
%A Gianluca Guadagni
%A Jon Chun
%A Jordi Cabot
%A Joseph Marvin Imperial
%A Juan A. Nolazco-Flores
%A Lori Landay
%A Matthew Thomas Jackson
%A Paul Rottger
%A Philip Torr
%A Trevor Darrell
%A Yong Suk Lee
%A Jakob Nicolaus Foerster
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-eiras24b
%I PMLR
%P 12348--12370
%U https://proceedings.mlr.press/v235/eiras24b.html
%V 235
%X In the next few years, applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education. The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation, in particular from some of the major tech companies who are leading in AI development. While regulation is important, it is key that it does not put at risk the budding field of open-source Generative AI. We argue for the responsible open sourcing of generative AI models in the near and medium term. To set the stage, we first introduce an AI openness taxonomy system and apply it to 40 current large language models. We then outline differential benefits and risks of open versus closed source AI and present potential risk mitigation, ranging from best practices to calls for technical and scientific contributions. We hope that this report will add a much needed missing voice to the current public discourse on near to mid-term AI safety and other societal impact.
APA
Eiras, F., Petrov, A., Vidgen, B., Schroeder De Witt, C., Pizzati, F., Elkins, K., Mukhopadhyay, S., Bibi, A., Csaba, B., Steibel, F., Barez, F., Smith, G., Guadagni, G., Chun, J., Cabot, J., Imperial, J.M., Nolazco-Flores, J.A., Landay, L., Jackson, M.T., Rottger, P., Torr, P., Darrell, T., Lee, Y.S., & Foerster, J.N. (2024). Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:12348-12370. Available from https://proceedings.mlr.press/v235/eiras24b.html.