Algorithmic Recourse for Long-Term Improvement

Kentaro Kanamori, Ken Kobayashi, Satoshi Hara, Takuya Takagi
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:28849-28877, 2025.

Abstract

Algorithmic recourse aims to provide a recourse action for altering an unfavorable prediction given by a model into a favorable one (e.g., loan approval). In practice, it is also desirable to ensure that the action improves the real-world outcome (e.g., loan repayment); we call this requirement improvement. Unfortunately, existing methods cannot ensure improvement unless the true outcome oracle is known. To address this issue, we propose a framework for suggesting improvement-oriented actions from a long-term perspective. Specifically, we introduce a new online learning task of assigning actions to a given sequence of instances, in which we can observe delayed feedback on whether a past suggested action achieved improvement. Using this feedback, we estimate an action that can achieve improvement for each instance. To solve this task, we propose two approaches based on contextual linear bandits and contextual Bayesian optimization. Experimental results demonstrate that our approaches assign improvement-oriented actions to more instances than existing methods.
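
The online protocol described above can be illustrated with a minimal sketch. The code below assumes a LinUCB-style contextual linear bandit over a finite set of candidate actions, with binary improvement feedback that arrives after a fixed delay; the class and variable names (LinUCBRecourse, candidate_actions, the delay d) are illustrative only and are not taken from the paper, whose actual algorithms may differ.

import numpy as np

class LinUCBRecourse:
    """Illustrative LinUCB-style policy for suggesting recourse actions."""

    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)      # regularized Gram matrix of observed features
        self.b = np.zeros(dim)    # accumulated feedback-weighted features
        self.alpha = alpha        # width of the exploration bonus

    def _phi(self, x, a):
        # joint feature of instance x and action a; here simply the
        # post-action feature vector x + a (an assumption for this sketch)
        return x + a

    def select(self, x, candidate_actions):
        # pick the action with the highest upper confidence bound on the
        # (linearly modeled) likelihood of improvement
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        scores = [theta @ self._phi(x, a)
                  + self.alpha * np.sqrt(self._phi(x, a) @ A_inv @ self._phi(x, a))
                  for a in candidate_actions]
        return candidate_actions[int(np.argmax(scores))]

    def update(self, x, a, improved):
        # delayed feedback: improved is 1 if the suggested action led to a
        # better real-world outcome (e.g., repayment), else 0
        phi = self._phi(x, a)
        self.A += np.outer(phi, phi)
        self.b += improved * phi

A toy online loop with delayed feedback might look as follows (all data here is synthetic, and the random "oracle" is a stand-in for the unknown real-world outcome):

rng = np.random.default_rng(0)
dim, d = 5, 3                                               # feature dim, feedback delay
policy = LinUCBRecourse(dim)
actions = [rng.normal(size=dim) * 0.1 for _ in range(8)]    # candidate actions
pending = []                                                # (round_due, x, a)
for t in range(100):
    x = rng.normal(size=dim)                                # incoming instance
    a = policy.select(x, actions)                           # suggest an action
    pending.append((t + d, x, a))
    # apply whatever feedback has become available this round
    for due, x_old, a_old in [p for p in pending if p[0] <= t]:
        improved = int(rng.random() < 0.5)                  # stand-in oracle
        policy.update(x_old, a_old, improved)
    pending = [p for p in pending if p[0] > t]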

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-kanamori25a,
  title     = {Algorithmic Recourse for Long-Term Improvement},
  author    = {Kanamori, Kentaro and Kobayashi, Ken and Hara, Satoshi and Takagi, Takuya},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {28849--28877},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kanamori25a/kanamori25a.pdf},
  url       = {https://proceedings.mlr.press/v267/kanamori25a.html},
  abstract  = {Algorithmic recourse aims to provide a recourse action for altering an unfavorable prediction given by a model into a favorable one (e.g., loan approval). In practice, it is also desirable to ensure that an action makes the real-world outcome better (e.g., loan repayment). We call this requirement improvement. Unfortunately, existing methods cannot ensure improvement unless we know the true oracle. To address this issue, we propose a framework for suggesting improvement-oriented actions from a long-term perspective. Specifically, we introduce a new online learning task of assigning actions to a given sequence of instances. We assume that we can observe delayed feedback on whether the past suggested action achieved improvement. Using the feedback, we estimate an action that can achieve improvement for each instance. To solve this task, we propose two approaches based on contextual linear bandit and contextual Bayesian optimization. Experimental results demonstrated that our approaches could assign improvement-oriented actions to more instances than the existing methods.}
}
Endnote
%0 Conference Paper
%T Algorithmic Recourse for Long-Term Improvement
%A Kentaro Kanamori
%A Ken Kobayashi
%A Satoshi Hara
%A Takuya Takagi
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kanamori25a
%I PMLR
%P 28849--28877
%U https://proceedings.mlr.press/v267/kanamori25a.html
%V 267
%X Algorithmic recourse aims to provide a recourse action for altering an unfavorable prediction given by a model into a favorable one (e.g., loan approval). In practice, it is also desirable to ensure that an action makes the real-world outcome better (e.g., loan repayment). We call this requirement improvement. Unfortunately, existing methods cannot ensure improvement unless we know the true oracle. To address this issue, we propose a framework for suggesting improvement-oriented actions from a long-term perspective. Specifically, we introduce a new online learning task of assigning actions to a given sequence of instances. We assume that we can observe delayed feedback on whether the past suggested action achieved improvement. Using the feedback, we estimate an action that can achieve improvement for each instance. To solve this task, we propose two approaches based on contextual linear bandit and contextual Bayesian optimization. Experimental results demonstrated that our approaches could assign improvement-oriented actions to more instances than the existing methods.
APA
Kanamori, K., Kobayashi, K., Hara, S. & Takagi, T. (2025). Algorithmic Recourse for Long-Term Improvement. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:28849-28877. Available from https://proceedings.mlr.press/v267/kanamori25a.html.
