A Reinforcement Learning Framework for Video Frame-Based Autonomous Car-Following
IEEE Open Journal of Intelligent Transportation Systems Pub Date : 2021-05-25 , DOI: 10.1109/ojits.2021.3083201
Mehdi Masmoudi , Hamdi Friji , Hakim Ghazzai , Yehia Massoud

Car-following theory has received considerable attention as a core component of Intelligent Transportation Systems. However, its application to emerging autonomous vehicles (AVs) remains an unexplored research area. AVs are designed to provide convenient and safe driving by avoiding accidents caused by human errors. They require advanced levels of recognition of other drivers' driving styles. With car-following models, AVs can use their built-in technology to understand the environment surrounding them and make real-time decisions to follow other vehicles. In this paper, we design an end-to-end car-following framework for AVs using automated object detection and navigation decision modules. The objective is to allow an AV to follow another vehicle based on Red-Green-Blue-Depth (RGB-D) frames. We propose to employ a joint solution involving the You Only Look Once version 3 (YOLOv3) object detector to identify the leader vehicle and other obstacles, and a reinforcement learning (RL) algorithm to navigate the self-driving vehicle. Two RL algorithms, namely Q-learning and Deep Q-learning, have been investigated. Simulation results show the convergence of the developed models and assess their efficiency in following the leader. It is shown that, with video frames only, promising results are achieved and that AVs can adopt reasonable car-following behavior.
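The abstract does not include implementation details, so the following is a minimal, self-contained sketch of the decision module only: a tabular Q-learning agent that keeps a desired gap to the leader vehicle. In the paper's framework the gap would come from YOLOv3 detections on the RGB frames combined with the aligned depth channel; here it is produced by toy longitudinal kinematics. The action set, reward shape, discretization, and all numeric parameters are illustrative assumptions, not the authors' design.

```python
import random

ACTIONS = ["brake", "hold", "accelerate"]                 # illustrative action set
ACCEL = {"brake": -2.0, "hold": 0.0, "accelerate": 2.0}   # m/s^2, assumed values
TARGET_GAP, N_BINS, DT = 10.0, 40, 0.1                    # desired gap (m), state bins, time step

def discretize(gap):
    """Map a continuous gap (in metres) to a state index."""
    return max(0, min(N_BINS - 1, int(gap)))

def reward(gap):
    """Penalize deviation from the target gap; heavy penalty near collision."""
    return -100.0 if gap < 1.0 else -abs(gap - TARGET_GAP)

def simulate(gap, ego_v, action, leader_v=15.0):
    """Toy kinematics standing in for the simulator and the leader vehicle.
    In the paper the gap would instead be measured from YOLOv3 + depth frames."""
    ego_v = max(0.0, ego_v + ACCEL[action] * DT)
    gap = max(0.0, gap + (leader_v - ego_v) * DT)
    return gap, ego_v

def train(episodes=2000, steps=300, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy behaviour policy."""
    Q = [[0.0] * len(ACTIONS) for _ in range(N_BINS)]
    for _ in range(episodes):
        gap, ego_v = random.uniform(2.0, 30.0), 15.0      # random initial gap
        for _ in range(steps):
            s = discretize(gap)
            a = (random.randrange(len(ACTIONS)) if random.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
            gap, ego_v = simulate(gap, ego_v, ACTIONS[a])
            s2 = discretize(gap)
            # standard Q-learning update
            Q[s][a] += alpha * (reward(gap) + gamma * max(Q[s2]) - Q[s][a])
    return Q

if __name__ == "__main__":
    Q = train()
    for g in (3.0, 10.0, 25.0):                           # inspect the greedy policy
        best = max(range(len(ACTIONS)), key=lambda i: Q[discretize(g)][i])
        print(f"gap={g:>4} m -> {ACTIONS[best]}")
```

Deep Q-learning, the second algorithm investigated in the paper, would replace the Q table with a neural network fed by the image-derived state, but the update rule it approximates is the same one shown above.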

Updated: 2021-06-04