Accelerating PDE-Constrained Optimization by the Derivative of Neural Operators

Ze Cheng, Zhuoyu Li, Wang Xiaoqiang, Jianing Huang, Zhizhou Zhang, Zhongkai Hao, Hang Su
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:10090-10105, 2025.

Abstract

Compared to traditional numerical solvers, PDE-Constrained Optimization (PDECO) problems can be accelerated significantly by employing gradient-based methods with surrogate models such as neural operators. However, this approach faces two key challenges: (1) Data inefficiency: a lack of efficient data sampling and effective training methods for neural operators, particularly for optimization purposes. (2) Instability: a high risk of optimization derailment due to inaccurate neural operator predictions and gradients. To address these challenges, we propose a novel framework: (1) Optimization-oriented training: We leverage data from full steps of traditional optimization algorithms and employ a specialized training method for neural operators. (2) Enhanced derivative learning: We introduce a Virtual-Fourier layer to enhance derivative learning within the neural operator, a crucial aspect of gradient-based optimization. (3) Hybrid optimization: We implement a hybrid approach that integrates neural operators with numerical solvers, providing robust regularization for the optimization process. Our extensive experimental results demonstrate the effectiveness of our model in accurately learning operators and their derivatives. Furthermore, our hybrid optimization approach exhibits robust convergence.
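
To make the optimization setting concrete, the sketch below illustrates (in JAX) the generic pattern the abstract refers to: a differentiable surrogate stands in for the PDE solver inside the objective, the design variable is updated with gradients obtained by automatic differentiation through the surrogate, and a traditional solver is queried occasionally to keep the surrogate-driven updates in check. This is a minimal illustration only; `surrogate_solve`, `numerical_solve`, the objective, and the damping rule are hypothetical placeholders, not the paper's actual training scheme, Virtual-Fourier layer, or hybrid algorithm.

```python
# Minimal sketch (not the authors' code) of gradient-based PDE-constrained
# optimization through a differentiable surrogate, with an occasional
# numerical-solver check standing in for a hybrid correction step.
import jax
import jax.numpy as jnp

def surrogate_solve(a):
    # Placeholder for a trained neural operator: control field a -> state u.
    # A smooth nonlinear map so the example runs end to end.
    return jnp.tanh(0.1 * jnp.cumsum(a))

def numerical_solve(a):
    # Placeholder for a traditional solver, evaluated sparingly; in a hybrid
    # scheme its more trusted solution regularizes the surrogate-driven steps.
    return jnp.tanh(0.1 * jnp.cumsum(a)) + 0.01 * jnp.sin(a)

u_target = jnp.linspace(0.0, 1.0, 64)   # desired PDE state
a = jnp.zeros(64)                       # control / design variable

def objective(a):
    u = surrogate_solve(a)
    return jnp.mean((u - u_target) ** 2) + 1e-3 * jnp.mean(a ** 2)

grad_fn = jax.jit(jax.grad(objective))  # derivative through the surrogate

lr, check_every = 0.5, 20
for step in range(100):
    a = a - lr * grad_fn(a)             # gradient step on the design variable
    if step % check_every == 0:
        # Hybrid-style check: if surrogate and solver disagree too much,
        # damp the step size instead of trusting the surrogate gradient.
        gap = jnp.mean((surrogate_solve(a) - numerical_solve(a)) ** 2)
        if gap > 1e-2:
            lr *= 0.5

print(float(objective(a)))
```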

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-cheng25f,
  title     = {Accelerating {PDE}-Constrained Optimization by the Derivative of Neural Operators},
  author    = {Cheng, Ze and Li, Zhuoyu and Xiaoqiang, Wang and Huang, Jianing and Zhang, Zhizhou and Hao, Zhongkai and Su, Hang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {10090--10105},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/cheng25f/cheng25f.pdf},
  url       = {https://proceedings.mlr.press/v267/cheng25f.html},
  abstract  = {Compared to traditional numerical solvers, PDE-Constrained Optimization (PDECO) problems can be accelerated significantly by employing gradient-based methods with surrogate models such as neural operators. However, this approach faces two key challenges: (1) Data inefficiency: a lack of efficient data sampling and effective training methods for neural operators, particularly for optimization purposes. (2) Instability: a high risk of optimization derailment due to inaccurate neural operator predictions and gradients. To address these challenges, we propose a novel framework: (1) Optimization-oriented training: We leverage data from full steps of traditional optimization algorithms and employ a specialized training method for neural operators. (2) Enhanced derivative learning: We introduce a Virtual-Fourier layer to enhance derivative learning within the neural operator, a crucial aspect of gradient-based optimization. (3) Hybrid optimization: We implement a hybrid approach that integrates neural operators with numerical solvers, providing robust regularization for the optimization process. Our extensive experimental results demonstrate the effectiveness of our model in accurately learning operators and their derivatives. Furthermore, our hybrid optimization approach exhibits robust convergence.}
}
Endnote
%0 Conference Paper
%T Accelerating PDE-Constrained Optimization by the Derivative of Neural Operators
%A Ze Cheng
%A Zhuoyu Li
%A Wang Xiaoqiang
%A Jianing Huang
%A Zhizhou Zhang
%A Zhongkai Hao
%A Hang Su
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-cheng25f
%I PMLR
%P 10090--10105
%U https://proceedings.mlr.press/v267/cheng25f.html
%V 267
%X Compared to traditional numerical solvers, PDE-Constrained Optimization (PDECO) problems can be accelerated significantly by employing gradient-based methods with surrogate models such as neural operators. However, this approach faces two key challenges: (1) Data inefficiency: a lack of efficient data sampling and effective training methods for neural operators, particularly for optimization purposes. (2) Instability: a high risk of optimization derailment due to inaccurate neural operator predictions and gradients. To address these challenges, we propose a novel framework: (1) Optimization-oriented training: We leverage data from full steps of traditional optimization algorithms and employ a specialized training method for neural operators. (2) Enhanced derivative learning: We introduce a Virtual-Fourier layer to enhance derivative learning within the neural operator, a crucial aspect of gradient-based optimization. (3) Hybrid optimization: We implement a hybrid approach that integrates neural operators with numerical solvers, providing robust regularization for the optimization process. Our extensive experimental results demonstrate the effectiveness of our model in accurately learning operators and their derivatives. Furthermore, our hybrid optimization approach exhibits robust convergence.
APA
Cheng, Z., Li, Z., Xiaoqiang, W., Huang, J., Zhang, Z., Hao, Z. & Su, H. (2025). Accelerating PDE-Constrained Optimization by the Derivative of Neural Operators. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:10090-10105. Available from https://proceedings.mlr.press/v267/cheng25f.html.