Optimization of a physical internet based supply chain using reinforcement learning
European Transport Research Review ( IF 4.3 ) Pub Date : 2020-07-24 , DOI: 10.1186/s12544-020-00437-3
Eszter Puskás , Ádám Budai , Gábor Bohács

Physical Internet based supply chains create open, global logistics systems that enable new types of collaboration among participants. The open system allows the logistical examination of vehicle technology innovations such as the platooning concept. This article explores collaboration between multiple platoons. For the reconfiguration of two platoons, heuristic and reinforcement learning (RL) based models have been developed. To our knowledge, this work is the first attempt to apply an RL-based decision model to the problem of controlling platoon cooperation. Vehicle exchange between platoons is provided by a virtual hub. Depending on the various input parameters, the efficiency of the model was examined through numerical examples in terms of an objective function based on transportation cost. Models using platoon reconfiguration are also compared to cases where no vehicle exchange is implemented. We have found that the reinforcement learning based model provides a more efficient solution for high incoming vehicle numbers and low dispatch intervals, whereas for low vehicle numbers the heuristic model performs better.
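The abstract does not specify the paper's state, action, or cost formulation, so the following is only a minimal toy sketch of the idea: an RL agent at a virtual hub decides whether two arriving platoons should exchange (rebalance) vehicles, learning from a simple transportation-cost proxy. The `transport_cost` function, the state encoding, and the bandit-style tabular update are all illustrative assumptions, not the authors' model.

```python
import random

# Hypothetical sketch: two platoons of sizes (n_a, n_b) meet at a virtual hub.
# Actions: keep the platoons as-is, or exchange vehicles to balance them.
# Reward: negative of an illustrative transportation-cost proxy.
ACTIONS = ["keep", "exchange"]

def transport_cost(n_a, n_b, exchanged):
    """Illustrative cost: fixed dispatch cost per platoon plus a per-vehicle
    cost discounted by platoon size (fuel saving capped at 8 vehicles)."""
    if exchanged:
        m = (n_a + n_b) // 2
        n_a, n_b = m, n_a + n_b - m
    return sum(10 + n * (1.0 - 0.05 * min(n, 8)) for n in (n_a, n_b))

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    """Tabular, one-step (bandit-style) epsilon-greedy Q-learning."""
    rng = random.Random(seed)
    q = {}  # q[(n_a, n_b)] -> {action: value}
    for _ in range(episodes):
        s = (rng.randint(1, 10), rng.randint(1, 10))  # random hub arrival
        qa = q.setdefault(s, {a: 0.0 for a in ACTIONS})
        a = rng.choice(ACTIONS) if rng.random() < eps else max(qa, key=qa.get)
        r = -transport_cost(*s, exchanged=(a == "exchange"))
        qa[a] += alpha * (r - qa[a])  # one-step update, no discounting
    return q

q = train()
# Greedy policy: for each observed platoon-size pair, the cheaper action.
policy = {s: max(qa, key=qa.get) for s, qa in q.items()}
```

This deliberately ignores dispatch intervals and arrival dynamics, which the paper varies in its numerical examples; a faithful reproduction would need the full model from the article.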

Updated: 2020-07-24