Switchable Decision: Dynamic Neural Generation Networks

Shujian Zhang, Korawat Tanwisuth, Chengyue Gong, Pengcheng He, Mingyuan Zhou
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:59919-59931, 2024.

Abstract

Auto-regressive generation models achieve competitive performance across many different NLP tasks such as summarization, question answering, and classification. However, they are also known for slow inference, which makes them challenging to deploy in real-time applications. We propose a switchable decision to accelerate inference by dynamically assigning computation resources to each data instance. By automatically deciding where to skip and how to balance quality against computation cost via constrained optimization, our dynamic neural generation networks enforce an efficient inference path and determine an optimized trade-off. Experiments across question answering, summarization, and classification benchmarks show that our method reduces computation cost during inference while keeping the same accuracy. Extensive experiments and ablation studies demonstrate that our method is general, effective, and beneficial for many NLP tasks.
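The core idea in the abstract (per-instance dynamic computation via learned skip decisions) can be illustrated with a minimal sketch. This is not the paper's implementation: the names (`SkippableStack`, `gate_w`, `skip_threshold`) are hypothetical, and a fixed gate threshold stands in for the paper's constrained optimization, which would instead learn where to skip under a computation budget. The sketch only shows the general mechanism: a cheap gate scores each layer for each input, and low-scoring layers are skipped at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

class SkippableStack:
    """Toy residual stack where each layer can be skipped per instance.

    Illustrative only; the paper's method learns skip decisions with
    constrained optimization rather than using a fixed threshold.
    """

    def __init__(self, n_layers, dim, skip_threshold=0.5):
        # Layer weights and per-layer gate vectors (randomly initialized here).
        self.weights = [rng.standard_normal((dim, dim)) / np.sqrt(dim)
                        for _ in range(n_layers)]
        self.gate_w = [rng.standard_normal(dim) / np.sqrt(dim)
                       for _ in range(n_layers)]
        self.skip_threshold = skip_threshold

    def forward(self, x):
        layers_used = 0
        for W, g in zip(self.weights, self.gate_w):
            # Gate: a scalar "usefulness" score for this layer on this input.
            score = 1.0 / (1.0 + np.exp(-x @ g))
            if score < self.skip_threshold:
                continue  # skip this layer for this instance, saving compute
            x = np.tanh(x @ W) + x  # residual connection makes skipping safe
            layers_used += 1
        return x, layers_used

stack = SkippableStack(n_layers=6, dim=16)
x = rng.standard_normal(16)
out, used = stack.forward(x)
print(f"layers used: {used} / 6")
```

Different inputs trigger different gates, so easy instances traverse fewer layers; the residual connections ensure a skipped layer degrades gracefully rather than breaking the representation.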

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhang24bj,
  title     = {Switchable Decision: Dynamic Neural Generation Networks},
  author    = {Zhang, Shujian and Tanwisuth, Korawat and Gong, Chengyue and He, Pengcheng and Zhou, Mingyuan},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {59919--59931},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24bj/zhang24bj.pdf},
  url       = {https://proceedings.mlr.press/v235/zhang24bj.html},
  abstract  = {Auto-regressive generation models achieve competitive performance across many different NLP tasks such as summarization, question answering, and classifications. However, they are also known for being slow in inference, which makes them challenging to deploy in real-time applications. We propose a switchable decision to accelerate inference by dynamically assigning computation resources for each data instance. Automatically making decisions on where to skip and how to balance quality and computation cost with constrained optimization, our dynamic neural generation networks enforce the efficient inference path and determine the optimized trade-off. Experiments across question answering, summarization, and classification benchmarks show that our method benefits from less computation cost during inference while keeping the same accuracy. Extensive experiments and ablation studies demonstrate that our method can be general, effective, and beneficial for many NLP tasks.}
}
Endnote
%0 Conference Paper
%T Switchable Decision: Dynamic Neural Generation Networks
%A Shujian Zhang
%A Korawat Tanwisuth
%A Chengyue Gong
%A Pengcheng He
%A Mingyuan Zhou
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhang24bj
%I PMLR
%P 59919--59931
%U https://proceedings.mlr.press/v235/zhang24bj.html
%V 235
%X Auto-regressive generation models achieve competitive performance across many different NLP tasks such as summarization, question answering, and classifications. However, they are also known for being slow in inference, which makes them challenging to deploy in real-time applications. We propose a switchable decision to accelerate inference by dynamically assigning computation resources for each data instance. Automatically making decisions on where to skip and how to balance quality and computation cost with constrained optimization, our dynamic neural generation networks enforce the efficient inference path and determine the optimized trade-off. Experiments across question answering, summarization, and classification benchmarks show that our method benefits from less computation cost during inference while keeping the same accuracy. Extensive experiments and ablation studies demonstrate that our method can be general, effective, and beneficial for many NLP tasks.
APA
Zhang, S., Tanwisuth, K., Gong, C., He, P., & Zhou, M. (2024). Switchable Decision: Dynamic Neural Generation Networks. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:59919-59931. Available from https://proceedings.mlr.press/v235/zhang24bj.html.