When both human and machine drivers make mistakes: Whom to blame?
Transportation Research Part A: Policy and Practice (IF 6.4), Pub Date: 2023-03-06, DOI: 10.1016/j.tra.2023.103637
Siming Zhai , Shan Gao , Lin Wang , Peng Liu

The advent of automated and algorithmic technology requires people to consider these systems when assigning responsibility for something going wrong. We focus on a central question: who or what should be responsible when both human and machine drivers make mistakes in human–machine shared-control vehicles? We examined human judgments of responsibility for automated vehicle (AV) crashes caused by a distracted test driver and a malfunctioning automated driving system (e.g., the 2018 Uber AV crash), using a sequential mixed-methods design: a text analysis of public comments posted after the first trial of the Uber case (Study 1) and a vignette-based experiment (Study 2). Studies 1 and 2 found that although people assigned more responsibility to the test driver than to the car manufacturer, in their view the car manufacturer was not free of responsibility, which contrasts with the jury decision in the Uber case, where the test driver was the only party facing criminal charges. In Study 2, participants allocated equal responsibility to the ordinary (non-test) driver and the car manufacturer. In Study 1, people gave different and sometimes conflicting reasons for their judgments. Some commented that human drivers in AVs will inevitably grow bored and lose vigilance and attention while the automated driving system is operating (termed "passive error"), whereas others thought the test driver could have remained attentive and should not have been distracted (termed "active error"). Study 2's manipulation of passive versus active errors, however, did not significantly influence responsibility judgments. Our results may offer insights for building a socially acceptable framework for responsibility judgments for AV crashes.




Updated: 2023-03-07