Mixed Messages? The Limits of Automated Social Media Content Analysis

Natasha Duarte, Emma Llanso, Anna Loup
Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:106-106, 2018.

Abstract

Governments and companies are turning to automated tools to make sense of what people post on social media. Policymakers routinely call for social media companies to identify and take down hate speech, terrorist propaganda, harassment, “fake news” or disinformation. Other policy proposals have focused on mining social media to inform law enforcement and immigration decisions. But these proposals wrongly assume that automated technology can accomplish on a large scale the kind of nuanced analysis that humans can do on a small scale. Today’s tools for analyzing social media text have limited ability to parse the meaning of human communication or detect the intent of the speaker. A knowledge gap exists between data scientists studying natural language processing (NLP) and policymakers advocating for wide adoption of automated social media analysis and moderation. Policymakers must understand the capabilities and limits of NLP before endorsing or adopting automated content analysis tools, particularly for making decisions that affect fundamental rights or access to government benefits. Without proper safeguards, these tools can facilitate overbroad censorship and biased enforcement of laws or terms of service. This paper draws on existing research to explain the capabilities and limitations of text classifiers for social media posts and other online content. It is aimed at helping researchers and technical experts address the gaps in policymakers’ knowledge about what is possible with automated text analysis.
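To make the class of tool under discussion concrete, the following is a minimal sketch (not taken from the paper) of a bag-of-words text classifier of the kind commonly used for automated content analysis. The training posts, labels, and the counter-speech example are hypothetical and purely illustrative; the point is that such models score surface word overlap, which is one reason they struggle with the meaning and intent the abstract describes.

# Minimal, illustrative sketch of a bag-of-words social media text classifier.
# All data below is hypothetical; labels: 1 = "abusive", 0 = "benign".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I hate you and everyone like you",      # abusive
    "you people should all disappear",       # abusive
    "great game last night, well played",    # benign
    "thanks for the helpful thread",         # benign
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)

# Counter-speech that quotes abuse in order to report or condemn it tends to
# be scored like the abuse itself, because the model only sees overlapping
# words, not the speaker's intent.
print(clf.predict(["He told me 'I hate you and everyone like you' -- please report him"]))

This kind of sketch also makes the paper's caution about training data visible: with so few examples, and with no representation of context, sarcasm, quotation, or dialect, the classifier's decisions track word choice rather than meaning.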

Cite this Paper


BibTeX
@InProceedings{pmlr-v81-duarte18a,
  title     = {Mixed Messages? The Limits of Automated Social Media Content Analysis},
  author    = {Duarte, Natasha and Llanso, Emma and Loup, Anna},
  booktitle = {Proceedings of the 1st Conference on Fairness, Accountability and Transparency},
  pages     = {106--106},
  year      = {2018},
  editor    = {Friedler, Sorelle A. and Wilson, Christo},
  volume    = {81},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--24 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v81/duarte18a/duarte18a.pdf},
  url       = {https://proceedings.mlr.press/v81/duarte18a.html}
}
APA
Duarte, N., Llanso, E., & Loup, A. (2018). Mixed Messages? The Limits of Automated Social Media Content Analysis. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in Proceedings of Machine Learning Research 81:106-106. Available from https://proceedings.mlr.press/v81/duarte18a.html.