How Does Explanation-Based Knowledge Influence Driver Take-Over in Conditional Driving Automation?
IEEE Transactions on Human-Machine Systems (IF 3.5), Pub Date: 2021-02-10, DOI: 10.1109/thms.2021.3051342
Huiping Zhou, Makoto Itoh, Satoshi Kitazaki

This article focuses on explanation-based knowledge about system limitations (SLs) under conditional driving automation (Society of Automotive Engineers Level 3) and aims to reveal how this knowledge influences driver intervention. By illustrating the relationships among the driving environment, the system, and the driver's mental model, three kinds of knowledge involved in the dynamic decision-making process of responding to an issued request to intervene (RtI) are identified as targets of knowledge-based learning: the occurrence of SLs, the concept of the RtI, and the scenes related to SLs. Based on these three concepts, the knowledge is examined at five levels: 1) no explanation, 2) occurrence of SLs, 3) concept of the RtI, 4) some typical scenes related to SLs, and 5) all of the above. Data were collected on a driving simulator with 100 participants who had no experience of automated driving. The experimental results show that instructing drivers on typical scenes increases the rate of successful take-over of car control from 55% to 95%. Furthermore, instructing them on the concept of the RtI yields a significant reduction in response time, from 5.48 to 3.62 s, in their first experience of an RtI. The results also reveal that the knowledge-based learning effect dwindles but does not vanish even after drivers have experienced the RtI a number of times. Compared with explaining all possible situations to a driver, introducing typical situations results in better take-over performance, even in critical or unexplained scenarios. This article demonstrates the importance and necessity of this knowledge, especially the explanation of sample scenes related to SLs, which contributes to drivers' take-over behavior.
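
As a rough illustration of the reported effect sizes, the following minimal Python sketch (not the authors' analysis code; the variable names and structure are assumptions for illustration only) lists the five explanation levels and computes the two improvements stated in the abstract: the success-rate gain attributed to explaining typical SL-related scenes (55% to 95%) and the first-RtI response-time reduction attributed to explaining the RtI concept (5.48 s to 3.62 s).

# Illustrative sketch only; not the authors' analysis code.
# The five explanation levels examined in the study.
EXPLANATION_LEVELS = [
    "1) no explanation",
    "2) occurrence of SLs",
    "3) concept of the RtI",
    "4) typical scenes related to SLs",
    "5) all of the above",
]

# Aggregate outcomes reported in the abstract.
success_without_scene_explanation = 0.55   # take-over success rate
success_with_scene_explanation = 0.95
rt_without_concept_explanation_s = 5.48    # first-RtI response time, seconds
rt_with_concept_explanation_s = 3.62

success_gain = success_with_scene_explanation - success_without_scene_explanation
rt_reduction_s = rt_without_concept_explanation_s - rt_with_concept_explanation_s
rt_reduction_pct = rt_reduction_s / rt_without_concept_explanation_s

print("Explanation levels:", "; ".join(EXPLANATION_LEVELS))
print(f"Success-rate gain from scene explanation: {success_gain:+.0%}")
print(f"Response-time reduction from RtI-concept explanation: "
      f"{rt_reduction_s:.2f} s ({rt_reduction_pct:.0%} faster)")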
