LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models

Tianci Liu, Haoyu Wang, Shiyang Wang, Yu Cheng, Jing Gao
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:32083-32099, 2024.

Abstract

Large language models (LLMs) have achieved impressive performance on various natural language generation tasks. Nonetheless, they can generate negative and harmful content that is biased against certain demographic groups (e.g., women), raising severe fairness concerns. As remedies, prior works intervened in the generation process by removing attitude or demographic information, which inevitably degrades generation quality and results in notable fairness-fluency trade-offs. However, it remains under-explored to what extent fluency must be sacrificed to achieve a desired level of fairness. In this work, we conduct the first formal study of this question from an information-theoretic perspective. We show that previous approaches intervene more than is necessary for debiasing, and we propose LIDAO, a general framework that provably debiases a (L)LM while preserving better fluency. We further robustify LIDAO for adversarial scenarios, in which a carefully crafted prompt may stimulate an instruction-following LLM to generate text whose fairness issue appears only when the prompt is also taken into account. Experiments on three LMs ranging from 0.7B to 7B parameters demonstrate the superiority of our method.
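
To make the fairness-fluency trade-off concrete, below is a minimal, hypothetical Python sketch of the general kind of decoding-time intervention the abstract attributes to prior work: next-token logits flagged as carrying demographic or attitude information are down-weighted by a strength knob. Every name here (intervene, attr_scores, alpha) is an illustrative assumption; this is not LIDAO's actual algorithm.

# Hypothetical sketch of a decoding-time intervention of the kind the
# abstract describes -- NOT LIDAO's algorithm. `attr_scores` and `alpha`
# are illustrative assumptions.
import torch
import torch.nn.functional as F

def intervene(lm_logits: torch.Tensor,
              attr_scores: torch.Tensor,
              alpha: float) -> torch.Tensor:
    """Return adjusted next-token log-probabilities.

    lm_logits   -- (vocab,) raw logits from the language model.
    attr_scores -- (vocab,) hypothetical per-token scores for how strongly
                   each token signals the protected attribute or attitude.
    alpha       -- intervention strength: 0 leaves the LM untouched (best
                   fluency); larger values suppress flagged tokens harder
                   (more fairness, worse fluency).
    """
    return F.log_softmax(lm_logits - alpha * attr_scores, dim=-1)

# Toy 5-token vocabulary where token 1 is attribute-laden.
lm_logits = torch.tensor([2.0, 1.5, 0.5, 0.0, -1.0])
attr_scores = torch.tensor([0.0, 3.0, 0.0, 0.0, 0.0])
for alpha in (0.0, 0.5, 2.0):
    probs = intervene(lm_logits, attr_scores, alpha).exp()
    print(f"alpha={alpha}: {[round(p, 3) for p in probs.tolist()]}")

Raising alpha drains probability from the flagged token but also distorts the rest of the LM's distribution; that distortion is the fluency cost that a limited intervention, in the paper's framing, aims to keep as small as the desired fairness level allows.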

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-liu24bm,
  title     = {{LIDAO}: Towards Limited Interventions for Debiasing ({L}arge) Language Models},
  author    = {Liu, Tianci and Wang, Haoyu and Wang, Shiyang and Cheng, Yu and Gao, Jing},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {32083--32099},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bm/liu24bm.pdf},
  url       = {https://proceedings.mlr.press/v235/liu24bm.html}
}
Endnote
%0 Conference Paper
%T LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models
%A Tianci Liu
%A Haoyu Wang
%A Shiyang Wang
%A Yu Cheng
%A Jing Gao
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-liu24bm
%I PMLR
%P 32083--32099
%U https://proceedings.mlr.press/v235/liu24bm.html
%V 235
APA
Liu, T., Wang, H., Wang, S., Cheng, Y. & Gao, J. (2024). LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:32083-32099. Available from https://proceedings.mlr.press/v235/liu24bm.html.
