Boosting Dynamic Programming with Neural Networks for Solving NP-hard Problems

Feidiao Yang, Tiancheng Jin, Tie-Yan Liu, Xiaoming Sun, Jialin Zhang
Proceedings of The 10th Asian Conference on Machine Learning, PMLR 95:726-739, 2018.

Abstract

Dynamic programming is a powerful method for solving combinatorial optimization problems, but it does not always work well, particularly for NP-hard problems with extremely large state spaces. In this paper, we propose an approach that boosts the capability of dynamic programming with neural networks. First, we replace the conventional tabular representation with polynomial-size neural networks that approximately represent the dynamic programming functions. We then design an iterative algorithm that trains the networks on data generated during a solution reconstruction process. Our method combines the approximation ability and flexibility of neural networks with dynamic programming's advantage in exploiting the intrinsic structure of a problem. The approach significantly reduces space complexity and offers a flexible trade-off among space, running time, and accuracy. We apply the method to the Travelling Salesman Problem (TSP). Experimental results show that our approach solves larger instances that are intractable for conventional dynamic programming, with near-optimal performance that outperforms well-known approximation algorithms.
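To make the general idea concrete, the sketch below illustrates (for TSP) how a small neural network can stand in for the exponential Held-Karp dynamic programming table, with training targets taken from the cost-to-complete along a reconstructed tour. This is a minimal, hypothetical sketch of the concept described in the abstract, not the authors' algorithm: the state encoding (visited-city bitmask plus one-hot current city), the MLP architecture, and the training loop are illustrative assumptions.

```python
# Minimal sketch: an MLP stands in for the exponential Held-Karp DP table
# V(S, j) = cost to finish the tour from city j having visited the set S.
# Encoding, network size, and training loop are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

n = 10                                    # number of cities (kept small)
rng = np.random.default_rng(0)
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

# State = binary mask of visited cities + one-hot of the current city.
net = nn.Sequential(nn.Linear(2 * n, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def encode(mask, city):
    x = np.zeros(2 * n, dtype=np.float32)
    x[:n] = mask
    x[n + city] = 1.0
    return torch.from_numpy(x)

def value(mask, city):                    # approximate DP value V(mask, city)
    with torch.no_grad():
        return net(encode(mask, city)).item()

for it in range(300):
    # 1) Reconstruct a tour from city 0 with the approximate Bellman rule:
    #    next city = argmin_j  d(cur, j) + V(visited + {j}, j).
    visited = np.zeros(n); visited[0] = 1.0
    tour, cur = [0], 0
    while len(tour) < n:
        best_j, best_s = None, float("inf")
        for j in range(n):
            if visited[j]:
                continue
            m = visited.copy(); m[j] = 1.0
            s = dist[cur, j] + value(m, j)
            if s < best_s:
                best_j, best_s = j, s
        visited[best_j] = 1.0; tour.append(best_j); cur = best_j

    # 2) Turn the tour into (state, cost-to-complete) training pairs and fit.
    xs, ys = [], []
    togo, mask = dist[tour[-1], 0], np.ones(n)
    for k in range(n - 1, 0, -1):
        xs.append(encode(mask.copy(), tour[k])); ys.append(togo)
        togo += dist[tour[k - 1], tour[k]]
        mask[tour[k]] = 0.0

    opt.zero_grad()
    loss = loss_fn(net(torch.stack(xs)).squeeze(-1),
                   torch.tensor(ys, dtype=torch.float32))
    loss.backward(); opt.step()

length = sum(dist[tour[k], tour[k + 1]] for k in range(n - 1)) + dist[tour[-1], 0]
print("reconstructed tour:", tour, "length: %.3f" % length)
```

The point of the sketch is the space trade-off the abstract mentions: the network stores only a polynomial number of parameters instead of the 2^n x n table of exact Held-Karp, and solution quality then depends on how well the learned value function generalizes across states.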

Cite this Paper


BibTeX
@InProceedings{pmlr-v95-yang18a,
  title     = {Boosting Dynamic Programming with Neural Networks for Solving NP-hard Problems},
  author    = {Yang, Feidiao and Jin, Tiancheng and Liu, Tie-Yan and Sun, Xiaoming and Zhang, Jialin},
  booktitle = {Proceedings of The 10th Asian Conference on Machine Learning},
  pages     = {726--739},
  year      = {2018},
  editor    = {Zhu, Jun and Takeuchi, Ichiro},
  volume    = {95},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--16 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v95/yang18a/yang18a.pdf},
  url       = {https://proceedings.mlr.press/v95/yang18a.html}
}
Endnote
%0 Conference Paper
%T Boosting Dynamic Programming with Neural Networks for Solving NP-hard Problems
%A Feidiao Yang
%A Tiancheng Jin
%A Tie-Yan Liu
%A Xiaoming Sun
%A Jialin Zhang
%B Proceedings of The 10th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jun Zhu
%E Ichiro Takeuchi
%F pmlr-v95-yang18a
%I PMLR
%P 726--739
%U https://proceedings.mlr.press/v95/yang18a.html
%V 95
APA
Yang, F., Jin, T., Liu, T., Sun, X. & Zhang, J. (2018). Boosting Dynamic Programming with Neural Networks for Solving NP-hard Problems. Proceedings of The 10th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 95:726-739. Available from https://proceedings.mlr.press/v95/yang18a.html.