Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control

Zifan LIU, Xinran Li, Shibo Chen, Gen Li, Jiashuo Jiang, Jun Zhang
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:1657-1665, 2025.

Abstract

Reinforcement learning (RL) has proven to be effective and versatile in inventory control (IC). However, further improvement of RL algorithms in the IC domain is impeded by two limitations of online experience. First, online experience is expensive to acquire in real-world applications. Given the inherently low sample efficiency of RL algorithms, it can take extensive time to collect enough data and train an RL policy to convergence. Second, online experience may not reflect the true demand due to the lost-sales phenomenon typical in IC, which makes the learning process more challenging. To address these challenges, we propose a training framework that combines reinforcement learning with feedback graph (RLFG) and intrinsically motivated exploration (IME) to boost sample efficiency. In particular, we first leverage the MDP structure inherent in lost-sales IC problems and design a feedback graph (FG) tailored to them that generates abundant side experiences to aid RL updates. We then conduct a rigorous theoretical analysis of how the designed FG reduces the sample complexity of RL methods. Guided by these insights, we design an intrinsic reward that directs the RL agent to explore the regions of the state-action space with more side experiences, further exploiting the FG's capability. Experimental results on single-item, multi-item, and multi-echelon environments demonstrate that our method greatly improves the sample efficiency of applying RL in IC. Our code is available at https://github.com/Ziffer-byakuya/RLIMFG4IC.
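To make the lost-sales mechanism behind the side experiences concrete, below is a minimal illustrative sketch, not the authors' implementation: the cost parameters, the zero-lead-time assumption, and helper names such as side_experiences are hypothetical. The idea it illustrates is that whenever realized demand is not censored by a stockout, the deterministic inventory dynamics let one online transition be replayed under every alternative order quantity, which is the kind of side experience a feedback graph can feed into RL updates.

# Illustrative sketch only (assumed parameters, zero lead time, hypothetical names).
PRICE, COST, HOLDING = 5.0, 2.0, 0.1   # assumed per-unit revenue, ordering cost, holding cost
MAX_ORDER = 10                          # assumed action space: order quantity in {0, ..., 10}

def step(inventory, order, demand):
    """Deterministic lost-sales transition given a known demand realization."""
    sales = min(inventory, demand)              # unmet demand is lost, not backlogged
    next_inventory = inventory - sales + order  # replenishment arrives immediately (assumption)
    reward = PRICE * sales - COST * order - HOLDING * next_inventory
    return reward, next_inventory

def side_experiences(inventory, observed_sales):
    """If sales < inventory, demand was uncensored and equals observed_sales;
    replay it under every order quantity to obtain extra (s, a, r, s') tuples."""
    if observed_sales >= inventory:             # stockout: true demand censored, no side experience
        return []
    demand = observed_sales
    batch = []
    for order in range(MAX_ORDER + 1):
        r, s_next = step(inventory, order, demand)
        batch.append((inventory, order, r, s_next))
    return batch

# Example: with 7 units on hand and 4 units sold, demand (= 4) is fully observed,
# so all 11 order quantities receive feedback from this single online sample.
print(side_experiences(inventory=7, observed_sales=4))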

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-liu25e,
  title     = {Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control},
  author    = {LIU, Zifan and Li, Xinran and Chen, Shibo and Li, Gen and Jiang, Jiashuo and Zhang, Jun},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {1657--1665},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/liu25e/liu25e.pdf},
  url       = {https://proceedings.mlr.press/v258/liu25e.html}
}
Endnote
%0 Conference Paper
%T Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control
%A Zifan LIU
%A Xinran Li
%A Shibo Chen
%A Gen Li
%A Jiashuo Jiang
%A Jun Zhang
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-liu25e
%I PMLR
%P 1657--1665
%U https://proceedings.mlr.press/v258/liu25e.html
%V 258
APA
LIU, Z., Li, X., Chen, S., Li, G., Jiang, J. & Zhang, J. (2025). Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:1657-1665. Available from https://proceedings.mlr.press/v258/liu25e.html.