Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts

Joel Jang, Seonghyeon Ye, Minjoon Seo
Proceedings of The 1st Transfer Learning for Natural Language Processing Workshop, PMLR 203:52-62, 2023.

Abstract

Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts; instead, they exhibit an inverse scaling law. We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT & GPT-3) of varying sizes (125M - 175B), (2) LMs further pretrained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts. All LM types perform worse on negated prompts as they scale, and they show a large gap relative to human performance when comparing the average score on original and negated prompts. By highlighting a critical limitation of existing LMs and methods, we urge the community to develop new approaches to building LMs that actually follow the given instructions. We provide the code and the datasets to explore negated prompts at https://github.com/joeljang/negated-prompts-for-llms.
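The actual datasets and evaluation code live in the repository linked above. As an illustration only (not the authors' code), the sketch below shows one common way such zero-shot evaluation is done: ranking answer candidates by their log-likelihood under an original prompt and under a negated variant. The gpt2 checkpoint and prompt wording are hypothetical stand-ins for the OPT/GPT-3 models and task prompts used in the paper.

```python
# Illustrative sketch: rank answer candidates by summed token log-probability
# under a causal LM, for an original prompt and a negated counterpart.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper evaluates OPT and GPT-3 (125M-175B)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def answer_logprob(prompt: str, answer: str) -> float:
    """Sum of token log-probabilities assigned to `answer` given `prompt`.

    Assumes the tokenization of `prompt + answer` starts with the tokens of
    `prompt` (true for GPT-2's BPE when `answer` begins with a space).
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position i of the (shifted) logits predicts token i+1 of the input.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    answer_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    return sum(
        log_probs[pos, full_ids[0, pos + 1]].item() for pos in answer_positions
    )

# Hypothetical example prompts; the paper's negations modify the task
# instruction (e.g., asking for the incorrect rather than the correct answer).
original = "Question: Is the sky blue on a clear day? Answer:"
negated = ("Question: Is the sky blue on a clear day? "
           "Respond with the incorrect answer. Answer:")
candidates = [" Yes", " No"]

for prompt in (original, negated):
    scores = {c: answer_logprob(prompt, c) for c in candidates}
    print(prompt, "->", max(scores, key=scores.get))
```

Under this ranking scheme, a model that truly follows the negated instruction would flip its preferred candidate between the two prompts; the paper's finding is that larger models increasingly fail to do so.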

Cite this Paper

BibTeX
@InProceedings{pmlr-v203-jang23a,
  title     = {Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts},
  author    = {Jang, Joel and Ye, Seonghyeon and Seo, Minjoon},
  booktitle = {Proceedings of The 1st Transfer Learning for Natural Language Processing Workshop},
  pages     = {52--62},
  year      = {2023},
  editor    = {Albalak, Alon and Zhou, Chunting and Raffel, Colin and Ramachandran, Deepak and Ruder, Sebastian and Ma, Xuezhe},
  volume    = {203},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v203/jang23a/jang23a.pdf},
  url       = {https://proceedings.mlr.press/v203/jang23a.html},
  abstract  = {Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with \textit{negated} prompts, but instead shows an \textit{inverse} scaling law. We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT & GPT-3) of varying sizes (125M - 175B), (2) LMs further pretrained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts; all LM types perform worse on negated prompts as they scale and show a huge performance gap between the human performance when comparing the average score on both original and negated prompts. By highlighting a critical limitation of existing LMs and methods, we urge the community to develop new approaches of developing LMs that actually follow the given instructions. We provide the code and the datasets to explore negated prompts at https://github.com/joeljang/negated-prompts-for-llms.}
}
Endnote
%0 Conference Paper
%T Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
%A Joel Jang
%A Seonghyeon Ye
%A Minjoon Seo
%B Proceedings of The 1st Transfer Learning for Natural Language Processing Workshop
%C Proceedings of Machine Learning Research
%D 2023
%E Alon Albalak
%E Chunting Zhou
%E Colin Raffel
%E Deepak Ramachandran
%E Sebastian Ruder
%E Xuezhe Ma
%F pmlr-v203-jang23a
%I PMLR
%P 52--62
%U https://proceedings.mlr.press/v203/jang23a.html
%V 203
%X Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with \textit{negated} prompts, but instead shows an \textit{inverse} scaling law. We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT & GPT-3) of varying sizes (125M - 175B), (2) LMs further pretrained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts; all LM types perform worse on negated prompts as they scale and show a huge performance gap between the human performance when comparing the average score on both original and negated prompts. By highlighting a critical limitation of existing LMs and methods, we urge the community to develop new approaches of developing LMs that actually follow the given instructions. We provide the code and the datasets to explore negated prompts at https://github.com/joeljang/negated-prompts-for-llms.
APA
Jang, J., Ye, S. & Seo, M. (2023). Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts. Proceedings of The 1st Transfer Learning for Natural Language Processing Workshop, in Proceedings of Machine Learning Research 203:52-62. Available from https://proceedings.mlr.press/v203/jang23a.html.
