Reinforcement Learning for Adaptive Mesh Refinement

Jiachen Yang, Tarik Dzanic, Brenden Petersen, Jun Kudo, Ketan Mittal, Vladimir Tomov, Jean-Sylvain Camier, Tuo Zhao, Hongyuan Zha, Tzanio Kolev, Robert Anderson, Daniel Faissol
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:5997-6014, 2023.

Abstract

Finite element simulations of physical systems governed by partial differential equations (PDEs) crucially depend on adaptive mesh refinement (AMR) to allocate computational budget to regions where higher resolution is required. Existing scalable AMR methods make heuristic refinement decisions based on instantaneous error estimation and thus do not aim for long-term optimality over an entire simulation. We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning (RL) to train refinement policies directly from simulation. AMR poses a challenge for RL because both the state dimension and the available action set change at every step; we address this by proposing new policy architectures with differing generality and inductive bias. The sizes of these policy models are independent of the mesh size, so trained policies can be deployed on larger simulations than those used at training time. We demonstrate in comprehensive experiments on static function estimation and time-dependent equations that RL policies can be trained on problems without using ground truth solutions, are competitive with a widely-used error estimator, and generalize to larger and unseen test problems.
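To make the formulation concrete, the sketch below shows one way the AMR-as-MDP loop and a mesh-size-independent policy could look on a 1D toy problem. Everything here (the ToyMesh class, the feature layout, the jump-based error proxy, the reward suggested in the closing comment) is an illustrative assumption, not the authors' implementation.

# A minimal sketch of an AMR-as-MDP loop: the state is the set of
# per-element features, the action set is "refine element i" (so it
# grows as the mesh grows), and the policy scores every element with
# the same fixed-size weights. Names and features are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

class ToyMesh:
    """1D mesh over [0, 1]; refining element i splits it in two."""
    def __init__(self, n_elements=8):
        self.nodes = np.linspace(0.0, 1.0, n_elements + 1)

    def element_features(self, u):
        """Per-element features: element size and a crude local error
        indicator. The state dimension grows with the element count."""
        size = self.nodes[1:] - self.nodes[:-1]
        jump = np.abs(np.diff(u(self.nodes)))  # solution jump across each element
        return np.stack([size, jump], axis=1)  # shape: (n_elements, 2)

    def refine(self, i):
        mid = 0.5 * (self.nodes[i] + self.nodes[i + 1])
        self.nodes = np.insert(self.nodes, i + 1, mid)

def policy_logits(theta, features):
    """Tiny per-element MLP: the same weights score every element, so
    the parameter count is independent of the mesh size even though
    the action set (one action per element) changes every step."""
    W1, b1, w2 = theta
    h = np.tanh(features @ W1 + b1)
    return h @ w2  # shape: (n_elements,)

# Fixed-size parameters: 2 input features -> 8 hidden units -> 1 score.
theta = (0.1 * rng.standard_normal((2, 8)),
         np.zeros(8),
         0.1 * rng.standard_normal(8))

u = lambda x: np.tanh(50.0 * (x - 0.3))   # target with a sharp front
mesh = ToyMesh()
for step in range(10):                    # one episode of refinements
    feats = mesh.element_features(u)
    logits = policy_logits(theta, feats)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    action = rng.choice(len(probs), p=probs)  # pick an element to refine
    mesh.refine(action)
# An RL algorithm (e.g. policy gradient) would update theta from a
# reward such as the reduction in global error per added degree of
# freedom, credited over the whole episode rather than per step.

The point of the per-element weight sharing is that theta has a fixed size regardless of how many elements the mesh contains, which is what allows a policy trained on small meshes to be deployed on larger ones.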

Cite this Paper

BibTeX
@InProceedings{pmlr-v206-yang23e,
  title     = {Reinforcement Learning for Adaptive Mesh Refinement},
  author    = {Yang, Jiachen and Dzanic, Tarik and Petersen, Brenden and Kudo, Jun and Mittal, Ketan and Tomov, Vladimir and Camier, Jean-Sylvain and Zhao, Tuo and Zha, Hongyuan and Kolev, Tzanio and Anderson, Robert and Faissol, Daniel},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {5997--6014},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/yang23e/yang23e.pdf},
  url       = {https://proceedings.mlr.press/v206/yang23e.html}
}
Endnote
%0 Conference Paper
%T Reinforcement Learning for Adaptive Mesh Refinement
%A Jiachen Yang
%A Tarik Dzanic
%A Brenden Petersen
%A Jun Kudo
%A Ketan Mittal
%A Vladimir Tomov
%A Jean-Sylvain Camier
%A Tuo Zhao
%A Hongyuan Zha
%A Tzanio Kolev
%A Robert Anderson
%A Daniel Faissol
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-yang23e
%I PMLR
%P 5997--6014
%U https://proceedings.mlr.press/v206/yang23e.html
%V 206
APA
Yang, J., Dzanic, T., Petersen, B., Kudo, J., Mittal, K., Tomov, V., Camier, J.-S., Zhao, T., Zha, H., Kolev, T., Anderson, R. & Faissol, D. (2023). Reinforcement Learning for Adaptive Mesh Refinement. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:5997-6014. Available from https://proceedings.mlr.press/v206/yang23e.html.
