Position: Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?

Shayne Longpre, Robert Mahari, Naana Obeng-Marnu, William Brannon, Tobin South, Katy Ilonka Gero, Alex Pentland, Jad Kabbara
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:32711-32725, 2024.

Abstract

New capabilities in foundation models are owed in large part to massive, widely-sourced, and under-documented training data collections. Existing practices in data collection have led to challenges in tracing authenticity, verifying consent, preserving privacy, addressing representation and bias, respecting copyright, and overall developing ethical and trustworthy foundation models. In response, regulation is emphasizing the need for training data transparency to understand foundation models’ limitations. Based on a large-scale analysis of the foundation model training data landscape and existing solutions, we identify the missing infrastructure to facilitate responsible foundation model development practices. We examine the current shortcomings of common tools for tracing data authenticity, consent, and documentation, and outline how policymakers, developers, and data creators can facilitate responsible foundation model development by adopting universal data provenance standards.

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-longpre24b,
  title = {Position: Data Authenticity, Consent, & Provenance for {AI} are all broken: what will it take to fix them?},
  author = {Longpre, Shayne and Mahari, Robert and Obeng-Marnu, Naana and Brannon, William and South, Tobin and Gero, Katy Ilonka and Pentland, Alex and Kabbara, Jad},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {32711--32725},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/longpre24b/longpre24b.pdf},
  url = {https://proceedings.mlr.press/v235/longpre24b.html},
  abstract = {New capabilities in foundation models are owed in large part to massive, widely-sourced, and under-documented training data collections. Existing practices in data collection have led to challenges in tracing authenticity, verifying consent, preserving privacy, addressing representation and bias, respecting copyright, and overall developing ethical and trustworthy foundation models. In response, regulation is emphasizing the need for training data transparency to understand foundation models’ limitations. Based on a large-scale analysis of the foundation model training data landscape and existing solutions, we identify the missing infrastructure to facilitate responsible foundation model development practices. We examine the current shortcomings of common tools for tracing data authenticity, consent, and documentation, and outline how policymakers, developers, and data creators can facilitate responsible foundation model development by adopting universal data provenance standards.}
}
Endnote
%0 Conference Paper
%T Position: Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?
%A Shayne Longpre
%A Robert Mahari
%A Naana Obeng-Marnu
%A William Brannon
%A Tobin South
%A Katy Ilonka Gero
%A Alex Pentland
%A Jad Kabbara
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-longpre24b
%I PMLR
%P 32711--32725
%U https://proceedings.mlr.press/v235/longpre24b.html
%V 235
%X New capabilities in foundation models are owed in large part to massive, widely-sourced, and under-documented training data collections. Existing practices in data collection have led to challenges in tracing authenticity, verifying consent, preserving privacy, addressing representation and bias, respecting copyright, and overall developing ethical and trustworthy foundation models. In response, regulation is emphasizing the need for training data transparency to understand foundation models’ limitations. Based on a large-scale analysis of the foundation model training data landscape and existing solutions, we identify the missing infrastructure to facilitate responsible foundation model development practices. We examine the current shortcomings of common tools for tracing data authenticity, consent, and documentation, and outline how policymakers, developers, and data creators can facilitate responsible foundation model development by adopting universal data provenance standards.
APA
Longpre, S., Mahari, R., Obeng-Marnu, N., Brannon, W., South, T., Gero, K.I., Pentland, A. & Kabbara, J. (2024). Position: Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them? Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:32711-32725. Available from https://proceedings.mlr.press/v235/longpre24b.html.