Toward Adaptive Trust Calibration for Level 2 Driving Automation
arXiv - CS - Robotics. Pub Date: 2020-09-24, DOI: arxiv-2009.11890
Kumar Akash, Neera Jain, Teruhisa Misu

Properly calibrated human trust is essential for successful interaction between humans and automation. However, while human trust calibration can be improved by increased automation transparency, too much transparency can increase human workload. To address this tradeoff, we present a probabilistic framework using a partially observable Markov decision process (POMDP) for modeling the coupled trust-workload dynamics of human behavior in a human-automation interaction context. We specifically consider hands-off Level 2 driving automation in a city environment involving multiple intersections, where the human chooses whether or not to rely on the automation. We consider automation reliability, automation transparency, and scene complexity, along with human reliance and eye-gaze behavior, to model the dynamics of human trust and workload. We demonstrate that our model framework can appropriately vary automation transparency based on real-time human trust and workload belief estimates to achieve trust calibration.
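To make the POMDP idea in the abstract concrete, here is a minimal sketch of a discrete belief filter over joint trust-workload states, with a toy policy that adjusts transparency from the belief. All state names, probabilities, and the policy thresholds are illustrative assumptions, not the paper's learned model (the paper estimates such dynamics from human reliance and eye-gaze data).

```python
import numpy as np

# Hypothetical discretization of the joint trust-workload state.
STATES = ["low_trust_low_wl", "low_trust_high_wl",
          "high_trust_low_wl", "high_trust_high_wl"]

# Placeholder transition model T[s, s'] = P(s' | s) for one transparency action.
T = np.array([
    [0.60, 0.20, 0.15, 0.05],
    [0.20, 0.60, 0.05, 0.15],
    [0.10, 0.05, 0.70, 0.15],
    [0.05, 0.10, 0.15, 0.70],
])

# Placeholder observation model O[s', o] = P(o | s') for o in {rely, not_rely}.
O = np.array([
    [0.3, 0.7],   # low trust -> unlikely to rely on the automation
    [0.2, 0.8],
    [0.8, 0.2],   # high trust -> likely to rely
    [0.7, 0.3],
])
OBS = {"rely": 0, "not_rely": 1}

def belief_update(belief, obs):
    """One step of the standard POMDP Bayes filter: predict, then correct."""
    predicted = T.T @ belief                 # sum_s P(s'|s) b(s)
    updated = O[:, OBS[obs]] * predicted     # weight by observation likelihood
    return updated / updated.sum()           # renormalize to a distribution

def choose_transparency(belief):
    """Toy policy: raise transparency when trust is low, lower it when workload is high."""
    p_low_trust = belief[0] + belief[1]
    p_high_wl = belief[1] + belief[3]
    if p_low_trust > 0.5 and p_high_wl < 0.5:
        return "high"    # more information, to build trust
    if p_high_wl > 0.5:
        return "low"     # less information, to limit workload
    return "medium"

belief = np.full(4, 0.25)                    # uniform prior over states
for obs in ["not_rely", "not_rely", "rely"]:
    belief = belief_update(belief, obs)
    print(obs, np.round(belief, 3), choose_transparency(belief))
```

A full solution would compute a policy over the belief space (e.g. via point-based POMDP solvers) rather than the threshold rule above; the sketch only shows how reliance observations move the belief and how transparency can be conditioned on it.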

Updated: 2020-09-28