Trust in Artificial Intelligence: Comparing Trust Processes Between Human and Automated Trustees in Light of Unfair Bias
Journal of Business and Psychology (IF 3.7) | Pub Date: 2022-06-28 | DOI: 10.1007/s10869-022-09829-9
Markus Langer, Cornelius J. König, Caroline Back, Victoria Hemsing

Automated systems based on artificial intelligence (AI) increasingly support decisions with ethical implications, where decision makers need to trust these systems. However, insights regarding trust in automated systems predominantly stem from contexts where the main driver of trust is that systems produce accurate outputs (e.g., alarm systems for monitoring tasks). It remains unclear whether what we know about trust in automated systems translates to application contexts where ethical considerations (e.g., fairness) are crucial to trust development. Using personnel selection as a sample context where ethical considerations are important, we investigate trust processes in light of a trust violation relating to unfair bias and a trust repair intervention. Specifically, participants evaluated preselection outcomes (i.e., sets of preselected applicants) produced by either a human or an automated system across twelve selection tasks. We additionally varied information regarding the imperfection of the human and the automated system. In task rounds five through eight, the preselected applicants were predominantly male, thus constituting a trust violation due to potential unfair bias. Before task round nine, participants received an excuse for the biased preselection (i.e., a trust repair intervention). The results of the online study showed that participants initially had less trust in the automated system. Furthermore, the trust violation and the trust repair intervention had weaker effects for the automated system, and these effects were partly stronger when system imperfection was highlighted. We conclude that insights from classical areas of automation only partially translate to the many emerging application contexts of such systems where ethical considerations are central to trust processes.




Updated: 2022-06-28