A Human-Like Model to Understand Surrounding Vehicles' Lane Changing Intentions for Autonomous Driving
IEEE Transactions on Vehicular Technology ( IF 6.1 ) Pub Date : 2021-04-16 , DOI: 10.1109/tvt.2021.3073407
Yingji Xia , Zhaowei Qu , Zhe Sun , Zhihui Li

Autonomous vehicles will have to share the road with human-driven vehicles for a prolonged period in the near future. Thus, they should be able to understand the lane changing intentions of surrounding human-driven vehicles. Unlike human drivers, the state-of-the-art models used in autonomous vehicles recognize a driver's lane changing intention based on the target vehicle's lateral movement, which leaves little to no time for the autonomous vehicle to react. In this paper, a Human-like Lane Changing Intention Understanding Model (HLCIUM) for autonomous driving is proposed to understand the lane changing intentions of surrounding vehicles. By imitating the selective attention mechanism of the human visual system, the proposed model emulates the way human drivers concentrate on surrounding vehicles and recognizes their lane changing intentions accordingly. The velocity changes of the surrounding vehicles are treated as lane changing hints, and attention is drawn to the corresponding vehicle following a saliency-based scheme. The lane changing intention is then identified by a Hidden Markov Model (HMM) based intention recognizer. The proposed model is tested on the Next Generation Simulation (NGSIM) vehicle trajectory dataset, reaching recognition accuracies of 90.89% for lane changing and 88.58% for lane keeping intentions in urban road scenarios, and 87.73% and 87.48%, respectively, in highway scenarios. Importantly, the average recognition time before the lane changing maneuver is 6.67 seconds for the urban road datasets and 7.08 seconds for the highway datasets, far earlier than that of state-of-the-art models. Furthermore, the proposed method shows efficiency and robustness on complex real urban traffic datasets, making it well suited for human-like autonomous driving systems.
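The abstract describes a two-stage pipeline: a saliency-based attention step that singles out the surrounding vehicle whose velocity change stands out most, followed by an HMM-based recognizer that classifies its intention as lane changing or lane keeping. The paper itself does not include code, so the Python sketch below is only an illustration of that pipeline under assumed definitions: the saliency score, the discrete observation symbols, the two-state HMM topology, and all probabilities are placeholder values, not the authors' parameters.

```python
# Illustrative sketch only: saliency weighting, HMM topology, and all numeric
# parameters are assumptions made to show the pipeline shape, not the paper's
# actual implementation.
import numpy as np

def saliency(velocity_history, window=5):
    """Score how strongly a vehicle's recent velocity change stands out.
    velocity_history: 1-D sequence of longitudinal speeds (m/s), newest last."""
    v = np.asarray(velocity_history, dtype=float)
    # Magnitude of recent speed changes as the attention cue.
    return float(np.abs(np.diff(v[-window:])).sum())

def select_attended_vehicle(histories):
    """Pick the surrounding vehicle whose velocity change is most salient."""
    scores = {vid: saliency(h) for vid, h in histories.items()}
    return max(scores, key=scores.get), scores

def hmm_log_likelihood(obs, pi, A, B):
    """Forward algorithm (log domain) for a discrete-observation HMM.
    obs: observation symbol indices, pi: (S,) initial distribution,
    A: (S, S) transition matrix, B: (S, O) emission matrix."""
    log_alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        log_alpha = (np.log(B[:, o])
                     + np.logaddexp.reduce(log_alpha[:, None] + np.log(A), axis=0))
    return np.logaddexp.reduce(log_alpha)

# Two competing hypothesis HMMs; observations: 0 = steady, 1 = decelerating,
# 2 = accelerating. All probabilities are placeholders.
LANE_CHANGE = dict(pi=np.array([0.8, 0.2]),
                   A=np.array([[0.7, 0.3], [0.2, 0.8]]),
                   B=np.array([[0.2, 0.4, 0.4], [0.1, 0.5, 0.4]]))
LANE_KEEP   = dict(pi=np.array([0.9, 0.1]),
                   A=np.array([[0.9, 0.1], [0.3, 0.7]]),
                   B=np.array([[0.8, 0.1, 0.1], [0.6, 0.2, 0.2]]))

def recognize_intention(obs):
    """Compare the log-likelihoods of the two hypothesis HMMs."""
    lc = hmm_log_likelihood(obs, **LANE_CHANGE)
    lk = hmm_log_likelihood(obs, **LANE_KEEP)
    return "lane change" if lc > lk else "lane keep"

if __name__ == "__main__":
    histories = {
        "veh_12": [15.0, 15.1, 15.0, 14.9, 15.0, 15.1],  # steady speed
        "veh_34": [16.0, 15.6, 15.0, 14.2, 13.5, 12.9],  # decelerating
    }
    attended, scores = select_attended_vehicle(histories)
    # Quantize the attended vehicle's speed changes into observation symbols.
    dv = np.diff(histories[attended])
    obs = [1 if d < -0.2 else 2 if d > 0.2 else 0 for d in dv]
    print(attended, scores, recognize_intention(obs))
```

In this sketch the decision reduces to a likelihood comparison between two small HMMs trained (here, hand-set) for the two hypotheses; in practice such models would be fit to labeled NGSIM trajectory segments rather than specified by hand.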

Updated: 2021-04-16