The Transformer Network for the Traveling Salesman Problem
arXiv - CS - Machine Learning Pub Date : 2021-03-04 , DOI: arxiv-2103.03012
Xavier Bresson, Thomas Laurent

The Traveling Salesman Problem (TSP) is among the most popular and most studied combinatorial problems, starting with von Neumann in 1951. It has driven the discovery of several optimization techniques such as cutting planes, branch-and-bound, local search, Lagrangian relaxation, and simulated annealing. The last five years have seen the emergence of promising techniques in which (graph) neural networks have learned new combinatorial algorithms. The central question is whether deep learning can learn better heuristics from data, i.e., replace human-engineered heuristics. This is appealing because developing algorithms to tackle NP-hard problems efficiently may require years of research, and many industry problems are combinatorial by nature. In this work, we propose to adapt the recent successful Transformer architecture, originally developed for natural language processing, to the combinatorial TSP. Training is done by reinforcement learning, hence without TSP training solutions, and decoding uses beam search. We report improved performance over recent learned heuristics, with an optimality gap of 0.004% for TSP50 and 0.39% for TSP100.
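The abstract mentions beam-search decoding of the tour. A minimal sketch of that decoding step is below; it is not the paper's implementation. As a stand-in for the learned Transformer policy, the next-city distribution is a softmax over negative distances from the current city (a nearest-neighbor-style heuristic); the beam keeps the `beam_width` partial tours with the lowest cumulative negative log-probability.

```python
import math

def beam_search_tsp(dist, beam_width=3):
    """Decode a TSP tour with beam search over a next-city distribution.

    `dist` is an n x n distance matrix. The softmax over negative
    distances stands in for a learned policy's output probabilities.
    """
    n = len(dist)
    # Each beam entry: (cumulative negative log-prob, partial tour)
    beams = [(0.0, (0,))]  # fix city 0 as the start of every tour
    for _ in range(n - 1):
        candidates = []
        for score, tour in beams:
            cur = tour[-1]
            remaining = [c for c in range(n) if c not in tour]
            # Softmax over negative distances: nearer cities get higher prob.
            logits = [-dist[cur][c] for c in remaining]
            m = max(logits)  # subtract max for numerical stability
            z = sum(math.exp(l - m) for l in logits)
            for c, l in zip(remaining, logits):
                logp = (l - m) - math.log(z)
                candidates.append((score - logp, tour + (c,)))
        candidates.sort(key=lambda t: t[0])
        beams = candidates[:beam_width]  # keep the best partial tours

    def tour_length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    # Among the surviving complete tours, return the shortest closed one.
    return min((t for _, t in beams), key=tour_length)
```

With `beam_width=1` this reduces to greedy decoding; wider beams trade compute for tour quality, which is the trade-off the paper's experiments exercise.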

Updated: 2021-03-05