One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning

Doyoung Kim, Susik Yoon, Dongmin Park, Youngjun Lee, Hwanjun Song, Jihwan Bang, Jae-Gil Lee
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:24658-24673, 2024.

Abstract

In real-world continual learning (CL) scenarios, tasks often exhibit intricate and unpredictable semantic shifts, posing challenges for fixed prompt management strategies, which are tailored to handle only semantic shifts of a uniform degree (i.e., uniformly mild or uniformly abrupt). To address this limitation, we propose an adaptive prompting approach that effectively accommodates semantic shifts of varying degrees, where mild and abrupt shifts are mixed. AdaPromptCL employs an assign-and-refine semantic grouping mechanism that dynamically manages prompt groups according to the semantic similarity between tasks, enhancing the quality of grouping through continuous refinement. Our experimental results demonstrate that AdaPromptCL outperforms existing prompting methods by up to 21.3%, especially on benchmark datasets with diverse semantic shifts between tasks.
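To make the assign-and-refine idea from the abstract concrete, the sketch below is an illustrative approximation only, not the paper's implementation: the cosine similarity, the sim_threshold value, the task-embedding representation, and the centroid-based refinement pass are all assumptions introduced here for exposition. It shows the two pieces the abstract names: assigning an incoming task to an existing prompt group when it is semantically close enough (a mild shift) or opening a new group otherwise (an abrupt shift), and periodically refining past group assignments as more tasks arrive.

import numpy as np

def _cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class SemanticPromptGroups:
    """Toy assign-and-refine grouping over per-task embeddings.

    Each group keeps only a centroid here; in an actual prompt-tuning system
    each group would also own its own set of learnable prompt tokens.
    """

    def __init__(self, sim_threshold=0.8):
        self.sim_threshold = sim_threshold  # hypothetical hyperparameter
        self.task_embs = []                 # one embedding per seen task
        self.assignments = []               # group id per seen task
        self.centroids = []                 # one centroid per group

    def assign(self, task_emb):
        """Assign the incoming task to the most similar group, else open a new one."""
        task_emb = np.asarray(task_emb, dtype=float)
        self.task_embs.append(task_emb)
        if self.centroids:
            sims = [_cosine(task_emb, c) for c in self.centroids]
            best = int(np.argmax(sims))
            if sims[best] >= self.sim_threshold:    # mild shift: reuse the group's prompts
                self.assignments.append(best)
                self._recompute_centroid(best)
                return best
        self.centroids.append(task_emb.copy())      # abrupt shift: new group, new prompts
        self.assignments.append(len(self.centroids) - 1)
        return self.assignments[-1]

    def refine(self, iters=5):
        """Revisit all past assignments so early, noisy groupings can be corrected."""
        for _ in range(iters):
            self.assignments = [
                int(np.argmax([_cosine(e, c) for c in self.centroids]))
                for e in self.task_embs
            ]
            for g in range(len(self.centroids)):
                self._recompute_centroid(g)
        return self.assignments

    def _recompute_centroid(self, g):
        members = [e for e, a in zip(self.task_embs, self.assignments) if a == g]
        if members:  # keep the old centroid if a group temporarily loses all tasks
            self.centroids[g] = np.mean(members, axis=0)

Under these assumptions, two semantically close tasks would fall into one group and share its prompts, while a task from a very different domain would trigger a new group; the paper defines AdaPromptCL's actual task representations, grouping criteria, and refinement procedure.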

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-kim24ai,
  title     = {One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning},
  author    = {Kim, Doyoung and Yoon, Susik and Park, Dongmin and Lee, Youngjun and Song, Hwanjun and Bang, Jihwan and Lee, Jae-Gil},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {24658--24673},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24ai/kim24ai.pdf},
  url       = {https://proceedings.mlr.press/v235/kim24ai.html},
  abstract  = {In real-world continual learning (CL) scenarios, tasks often exhibit intricate and unpredictable semantic shifts, posing challenges for fixed prompt management strategies which are tailored to only handle semantic shifts of uniform degree (i.e., uniformly mild or uniformly abrupt). To address this limitation, we propose an adaptive prompting approach that effectively accommodates semantic shifts of varying degree where mild and abrupt shifts are mixed. AdaPromptCL employs the assign-and-refine semantic grouping mechanism that dynamically manages prompt groups in accordance with the semantic similarity between tasks, enhancing the quality of grouping through continuous refinement. Our experiment results demonstrate that AdaPromptCL outperforms existing prompting methods by up to 21.3%, especially in the benchmark datasets with diverse semantic shifts between tasks.}
}
Endnote
%0 Conference Paper
%T One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning
%A Doyoung Kim
%A Susik Yoon
%A Dongmin Park
%A Youngjun Lee
%A Hwanjun Song
%A Jihwan Bang
%A Jae-Gil Lee
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-kim24ai
%I PMLR
%P 24658--24673
%U https://proceedings.mlr.press/v235/kim24ai.html
%V 235
%X In real-world continual learning (CL) scenarios, tasks often exhibit intricate and unpredictable semantic shifts, posing challenges for fixed prompt management strategies which are tailored to only handle semantic shifts of uniform degree (i.e., uniformly mild or uniformly abrupt). To address this limitation, we propose an adaptive prompting approach that effectively accommodates semantic shifts of varying degree where mild and abrupt shifts are mixed. AdaPromptCL employs the assign-and-refine semantic grouping mechanism that dynamically manages prompt groups in accordance with the semantic similarity between tasks, enhancing the quality of grouping through continuous refinement. Our experiment results demonstrate that AdaPromptCL outperforms existing prompting methods by up to 21.3%, especially in the benchmark datasets with diverse semantic shifts between tasks.
APA
Kim, D., Yoon, S., Park, D., Lee, Y., Song, H., Bang, J., & Lee, J. (2024). One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:24658-24673. Available from https://proceedings.mlr.press/v235/kim24ai.html.
