Selfish or Utilitarian Automated Vehicles? Deontological Evaluation and Public Acceptance
International Journal of Human-Computer Interaction ( IF 4.7 ) Pub Date : 2021-02-04 , DOI: 10.1080/10447318.2021.1876357
Peng Liu, Jinting Liu

ABSTRACT

This research involves a controversial topic in the public sphere: should automated vehicles (AVs) be programmed with selfish algorithms to protect their passengers at all costs, or with utilitarian algorithms to minimize social loss in crashes involving moral dilemmas? Among a growing number of studies on what AVs should do in sacrificial dilemmas from the perspective of laypeople, few have considered how laypeople respond to AVs programmed with these crash algorithms. Our survey collected participants' deontological evaluation (i.e., evaluations of the moral righteousness of the decisions made by these AVs and of adopting these AVs), their perceived benefit and risk of these AVs, and their behavioral intention to use and willingness to pay (WTP) a premium for these AVs. The participants (N = 580) perceived greater benefits from selfish AVs and reported a greater intention to use and a higher WTP a premium for selfish AVs than for utilitarian AVs. Deontological evaluation and perceived risk did not differ significantly between these AVs. Overall, selfish AVs were more acceptable to our participants. Deontological evaluation, perceived benefit, and perceived risk were predictive of behavioral intention. Additionally, after controlling for these factors, vehicle type still exerted a direct influence on behavioral intention. Perceived benefit was the dominant predictor of WTP a premium. Remarkably, participants expressed an insufficient intention to adopt either type of AV, probably indicating that, with regard to AV deployment, non-positive public attitudes toward AVs are more pressing than the challenge of deciding upon their ethical behaviors in rare moral dilemmas.



Last updated: 2021-02-04