Unnatural Languages Are Not Bugs but Features for LLMs

Keyu Duan, Yiran Zhao, Zhili Feng, Jinjie Ni, Tianyu Pang, Qian Liu, Tianle Cai, Longxu Dou, Kenji Kawaguchi, Anirudh Goyal, J Zico Kolter, Michael Qizhe Shieh
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:14778-14792, 2025.

Abstract

Large Language Models (LLMs) have been observed to process non-human-readable text sequences, such as jailbreak prompts, a behavior often viewed as a bug in aligned LLMs. In this work, we present a systematic investigation challenging this perception, demonstrating that unnatural languages (strings that appear incomprehensible to humans but retain semantic meaning for LLMs) contain latent features usable by models. Notably, these latent features generalize across different models and tasks during inference. Furthermore, models fine-tuned on unnatural versions of instruction datasets perform on par with those trained on natural language, achieving an average win rate of $49.71$ on Length-controlled AlpacaEval 2.0 across various base models. In addition, through comprehensive analysis, we demonstrate that LLMs process unnatural languages by filtering noise and inferring contextual meaning from the filtered words. Our code is publicly available at https://github.com/John-AI-Lab/Unnatural_Language.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-duan25c,
  title     = {Unnatural Languages Are Not Bugs but Features for {LLM}s},
  author    = {Duan, Keyu and Zhao, Yiran and Feng, Zhili and Ni, Jinjie and Pang, Tianyu and Liu, Qian and Cai, Tianle and Dou, Longxu and Kawaguchi, Kenji and Goyal, Anirudh and Kolter, J Zico and Shieh, Michael Qizhe},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {14778--14792},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/duan25c/duan25c.pdf},
  url       = {https://proceedings.mlr.press/v267/duan25c.html},
  abstract  = {Large Language Models (LLMs) have been observed to process non-human-readable text sequences, such as jailbreak prompts, often viewed as a bug for aligned LLMs. In this work, we present a systematic investigation challenging this perception, demonstrating that unnatural languages - strings that appear incomprehensible to humans but maintain semantic meanings for LLMs - contain latent features usable by models. Notably, unnatural languages possess latent features that can be generalized across different models and tasks during inference. Furthermore, models fine-tuned on unnatural versions of instruction datasets perform on-par with those trained on natural language, achieving $49.71$ win rates in Length-controlled AlpacaEval 2.0 in average across various base models. In addition, through comprehensive analysis, we demonstrate that LLMs process unnatural languages by filtering noise and inferring contextual meaning from filtered words. Our code is publicly available at https://github.com/John-AI-Lab/Unnatural_Language.}
}
Endnote
%0 Conference Paper
%T Unnatural Languages Are Not Bugs but Features for LLMs
%A Keyu Duan
%A Yiran Zhao
%A Zhili Feng
%A Jinjie Ni
%A Tianyu Pang
%A Qian Liu
%A Tianle Cai
%A Longxu Dou
%A Kenji Kawaguchi
%A Anirudh Goyal
%A J Zico Kolter
%A Michael Qizhe Shieh
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-duan25c
%I PMLR
%P 14778--14792
%U https://proceedings.mlr.press/v267/duan25c.html
%V 267
%X Large Language Models (LLMs) have been observed to process non-human-readable text sequences, such as jailbreak prompts, often viewed as a bug for aligned LLMs. In this work, we present a systematic investigation challenging this perception, demonstrating that unnatural languages - strings that appear incomprehensible to humans but maintain semantic meanings for LLMs - contain latent features usable by models. Notably, unnatural languages possess latent features that can be generalized across different models and tasks during inference. Furthermore, models fine-tuned on unnatural versions of instruction datasets perform on-par with those trained on natural language, achieving $49.71$ win rates in Length-controlled AlpacaEval 2.0 in average across various base models. In addition, through comprehensive analysis, we demonstrate that LLMs process unnatural languages by filtering noise and inferring contextual meaning from filtered words. Our code is publicly available at https://github.com/John-AI-Lab/Unnatural_Language.
APA
Duan, K., Zhao, Y., Feng, Z., Ni, J., Pang, T., Liu, Q., Cai, T., Dou, L., Kawaguchi, K., Goyal, A., Kolter, J. Z., & Shieh, M. Q. (2025). Unnatural Languages Are Not Bugs but Features for LLMs. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:14778-14792. Available from https://proceedings.mlr.press/v267/duan25c.html.