Efficient Alignment of Large Language Models via Data Sampling

Amrit Khera, Rajat Ghosh, Debojyoti Dutta
Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop, PMLR 262:55-72, 2024.

Abstract

Despite the capabilities of Large Language Models (LLMs), their output is not always safe or desirable. Aligning these models to human values is a critical step for their safe adoption. Aligning LLMs requires huge amounts of data, computation, and time. Moreover, curating data with human feedback is expensive and time-consuming. Recent research demonstrates the benefit of data engineering in the fine-tuning and pre-training paradigms to bring down such costs. However, alignment differs from the aforementioned paradigms, and it is unclear whether data-efficient alignment is feasible. In this work, we first aim to understand how the performance of LLM alignment scales with data. We find that LLM alignment performance follows an exponential-plateau pattern, tapering off after a rapid initial increase. We identify data subsampling as a viable method to reduce the resources required for alignment. Further, we propose a methodology for efficient alignment that identifies a small, high-quality subset, thereby reducing the computation and time required. We evaluate the proposed methodology over multiple datasets and compare the results. We find that the model aligned using our proposed methodology outperforms other sampling methods and performs comparably to the model aligned with the full dataset while using a fraction of the resources.
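The exponential-plateau scaling the abstract describes can be illustrated with a minimal sketch. The functional form, constants, and function name below are illustrative assumptions, not values from the paper; they only show the qualitative behavior: rapid initial gains from data, then diminishing returns.

```python
import math

# Hypothetical scaling curve of the exponential-plateau form
# y(n) = y_max * (1 - exp(-k * n)). y_max and k are illustrative
# constants, not fitted values from the paper.
def alignment_performance(n_samples: int, y_max: float = 0.85, k: float = 0.002) -> float:
    """Performance rises rapidly with sample count, then tapers off."""
    return y_max * (1.0 - math.exp(-k * n_samples))

# Diminishing returns: a small subset recovers most of the full-data gain.
small = alignment_performance(1_000)   # well past the steep initial rise
full = alignment_performance(10_000)   # essentially at the plateau
print(f"10% of data recovers {small / full:.0%} of full-data performance")
```

Under such a curve, most of the achievable performance comes from a small fraction of the data, which is the intuition behind sampling a small, high-quality subset for alignment.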

Cite this Paper


BibTeX
@InProceedings{pmlr-v262-khera24a,
  title     = {Efficient Alignment of Large Language Models via Data Sampling},
  author    = {Khera, Amrit and Ghosh, Rajat and Dutta, Debojyoti},
  booktitle = {Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop},
  pages     = {55--72},
  year      = {2024},
  editor    = {Rezagholizadeh, Mehdi and Passban, Peyman and Samiee, Soheila and Partovi Nia, Vahid and Cheng, Yu and Deng, Yue and Liu, Qun and Chen, Boxing},
  volume    = {262},
  series    = {Proceedings of Machine Learning Research},
  month     = {14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v262/main/assets/khera24a/khera24a.pdf},
  url       = {https://proceedings.mlr.press/v262/khera24a.html},
  abstract  = {Despite the capabilities of Large Language Models (LLMs), their output is not always safe or desirable. Aligning these models to human values is a critical step for their safe adoption. Aligning LLMs requires huge amounts of data, computation, and time. Moreover, curating data with human feedback is expensive and time-consuming. Recent research demonstrates the benefit of data engineering in the fine-tuning and pre-training paradigms to bring down such costs. However, alignment differs from the aforementioned paradigms, and it is unclear whether data-efficient alignment is feasible. In this work, we first aim to understand how the performance of LLM alignment scales with data. We find that LLM alignment performance follows an exponential-plateau pattern, tapering off after a rapid initial increase. We identify data subsampling as a viable method to reduce the resources required for alignment. Further, we propose a methodology for efficient alignment that identifies a small, high-quality subset, thereby reducing the computation and time required. We evaluate the proposed methodology over multiple datasets and compare the results. We find that the model aligned using our proposed methodology outperforms other sampling methods and performs comparably to the model aligned with the full dataset while using a fraction of the resources.}
}
Endnote
%0 Conference Paper
%T Efficient Alignment of Large Language Models via Data Sampling
%A Amrit Khera
%A Rajat Ghosh
%A Debojyoti Dutta
%B Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop
%C Proceedings of Machine Learning Research
%D 2024
%E Mehdi Rezagholizadeh
%E Peyman Passban
%E Soheila Samiee
%E Vahid Partovi Nia
%E Yu Cheng
%E Yue Deng
%E Qun Liu
%E Boxing Chen
%F pmlr-v262-khera24a
%I PMLR
%P 55--72
%U https://proceedings.mlr.press/v262/khera24a.html
%V 262
%X Despite the capabilities of Large Language Models (LLMs), their output is not always safe or desirable. Aligning these models to human values is a critical step for their safe adoption. Aligning LLMs requires huge amounts of data, computation, and time. Moreover, curating data with human feedback is expensive and time-consuming. Recent research demonstrates the benefit of data engineering in the fine-tuning and pre-training paradigms to bring down such costs. However, alignment differs from the aforementioned paradigms, and it is unclear whether data-efficient alignment is feasible. In this work, we first aim to understand how the performance of LLM alignment scales with data. We find that LLM alignment performance follows an exponential-plateau pattern, tapering off after a rapid initial increase. We identify data subsampling as a viable method to reduce the resources required for alignment. Further, we propose a methodology for efficient alignment that identifies a small, high-quality subset, thereby reducing the computation and time required. We evaluate the proposed methodology over multiple datasets and compare the results. We find that the model aligned using our proposed methodology outperforms other sampling methods and performs comparably to the model aligned with the full dataset while using a fraction of the resources.
APA
Khera, A., Ghosh, R. & Dutta, D. (2024). Efficient Alignment of Large Language Models via Data Sampling. Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop, in Proceedings of Machine Learning Research 262:55-72. Available from https://proceedings.mlr.press/v262/khera24a.html.
