Select and Optimize: Learning to solve large-scale TSP instances

Hanni Cheng, Haosi Zheng, Ya Cong, Weihao Jiang, Shiliang Pu
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:1219-1231, 2023.

Abstract

Learning-based algorithms for solving the TSP have become popular in recent years, but most existing approaches cannot solve very large-scale TSP instances within a limited time budget. To address this limitation, this paper introduces a method that selects and locally optimizes sub-parts of a solution. Concretely, we design a novel framework that generalizes a small-scale selector-and-optimizer network to large-scale TSP instances by iteratively selecting and optimizing one sub-problem at a time. At each iteration, the running time of sub-problem sampling and selection is significantly reduced by making full use of parallel computing. Our neural model is designed to exploit the characteristics of the sub-problems. Furthermore, we introduce a destroy-and-repair strategy that helps the iterative algorithm escape local minima from a global perspective. Extensive experiments show that our method accelerates state-of-the-art learning-based algorithms by more than 2x while achieving better solution quality on large-scale TSP instances ranging in size from 200 to 20,000 cities.
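
To make the procedure concrete, here is a minimal, hypothetical Python sketch of the select-and-optimize loop the abstract describes: sample several candidate sub-problems of the current tour, optimize each locally, and keep whichever yields the largest improvement. Every name in it is an illustrative assumption rather than the authors' code, and a simple windowed 2-opt reversal stands in for the paper's learned selector and optimizer networks.

import math
import random

def tour_length(tour, coords):
    # Length of the closed tour; tour[-1] -> tour[0] closes the cycle.
    return sum(math.dist(coords[tour[i]], coords[tour[i - 1]])
               for i in range(len(tour)))

def select_and_optimize(coords, iters=200, k=10, n_samples=8):
    # Start from a random permutation of the cities.
    tour = list(range(len(coords)))
    random.shuffle(tour)
    best_len = tour_length(tour, coords)
    for _ in range(iters):
        # Sample several sub-problems: windows of k consecutive positions.
        # The paper scores candidates with a learned selector network and
        # samples them in parallel; here we just keep the best of a batch.
        best_cand, best_cand_len = None, best_len
        for _ in range(n_samples):
            i = random.randrange(len(tour) - k + 1)
            # Stand-in "optimizer": reverse the window (a 2-opt move).
            cand = tour[:i] + tour[i:i + k][::-1] + tour[i + k:]
            cand_len = tour_length(cand, coords)
            if cand_len < best_cand_len:
                best_cand, best_cand_len = cand, cand_len
        if best_cand is not None:  # accept only strict improvements
            tour, best_len = best_cand, best_cand_len
    return tour, best_len

if __name__ == "__main__":
    random.seed(0)
    coords = [(random.random(), random.random()) for _ in range(100)]
    tour, length = select_and_optimize(coords)
    print(f"tour length after select-and-optimize: {length:.3f}")

Because only improving moves are accepted, this loop stalls in local minima; the destroy-and-repair step mentioned in the abstract would periodically remove and reinsert part of the tour to escape them, and is omitted here for brevity.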

Cite this Paper


BibTeX

@InProceedings{pmlr-v206-cheng23a,
  title     = {Select and Optimize: Learning to solve large-scale TSP instances},
  author    = {Cheng, Hanni and Zheng, Haosi and Cong, Ya and Jiang, Weihao and Pu, Shiliang},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {1219--1231},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/cheng23a/cheng23a.pdf},
  url       = {https://proceedings.mlr.press/v206/cheng23a.html},
  abstract  = {Learning-based algorithms for solving the TSP have become popular in recent years, but most existing approaches cannot solve very large-scale TSP instances within a limited time budget. To address this limitation, this paper introduces a method that selects and locally optimizes sub-parts of a solution. Concretely, we design a novel framework that generalizes a small-scale selector-and-optimizer network to large-scale TSP instances by iteratively selecting and optimizing one sub-problem at a time. At each iteration, the running time of sub-problem sampling and selection is significantly reduced by making full use of parallel computing. Our neural model is designed to exploit the characteristics of the sub-problems. Furthermore, we introduce a destroy-and-repair strategy that helps the iterative algorithm escape local minima from a global perspective. Extensive experiments show that our method accelerates state-of-the-art learning-based algorithms by more than 2x while achieving better solution quality on large-scale TSP instances ranging in size from 200 to 20,000 cities.}
}
Endnote

%0 Conference Paper
%T Select and Optimize: Learning to solve large-scale TSP instances
%A Hanni Cheng
%A Haosi Zheng
%A Ya Cong
%A Weihao Jiang
%A Shiliang Pu
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-cheng23a
%I PMLR
%P 1219--1231
%U https://proceedings.mlr.press/v206/cheng23a.html
%V 206
%X Learning-based algorithms for solving the TSP have become popular in recent years, but most existing approaches cannot solve very large-scale TSP instances within a limited time budget. To address this limitation, this paper introduces a method that selects and locally optimizes sub-parts of a solution. Concretely, we design a novel framework that generalizes a small-scale selector-and-optimizer network to large-scale TSP instances by iteratively selecting and optimizing one sub-problem at a time. At each iteration, the running time of sub-problem sampling and selection is significantly reduced by making full use of parallel computing. Our neural model is designed to exploit the characteristics of the sub-problems. Furthermore, we introduce a destroy-and-repair strategy that helps the iterative algorithm escape local minima from a global perspective. Extensive experiments show that our method accelerates state-of-the-art learning-based algorithms by more than 2x while achieving better solution quality on large-scale TSP instances ranging in size from 200 to 20,000 cities.
APA
Cheng, H., Zheng, H., Cong, Y., Jiang, W. & Pu, S. (2023). Select and Optimize: Learning to solve large-scale TSP instances. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:1219-1231. Available from https://proceedings.mlr.press/v206/cheng23a.html.
