Trust in Artificial Intelligence: Meta-Analytic Findings
Human Factors: The Journal of the Human Factors and Ergonomics Society (IF 2.9). Pub Date: 2021-05-28. DOI: 10.1177/00187208211013988
Alexandra D. Kaplan, Theresa T. Kessler, J. Christopher Brill, P. A. Hancock

Objective

The present meta-analysis sought to determine significant factors that predict trust in artificial intelligence (AI). Such factors were divided into those relating to (a) the human trustor, (b) the AI trustee, and (c) the shared context of their interaction.

Background

There are many factors influencing trust in robots, automation, and technology in general, and there have been several meta-analytic attempts to understand the antecedents of trust in these areas. However, no targeted meta-analysis has been performed examining the antecedents of trust in AI.

Method

Data from 65 articles were analyzed across the three predicted categories, as well as the subcategories of human characteristics and abilities, AI performance and attributes, and contextual tasking. Four common applications of AI (i.e., chatbots, robots, automated vehicles, and nonembodied, plain algorithms) were also examined as potential moderating factors.

Results

Results showed that all of the examined categories were significant predictors of trust in AI, as were many individual antecedents, including AI reliability and anthropomorphism.

Conclusion

Overall, the results of this meta-analysis identified several factors that influence trust, including some that have no bearing on AI performance. Additionally, we highlight areas where empirical research is currently lacking.

Application

Findings from this analysis will allow designers to build systems that elicit higher or lower levels of trust, as required.


Updated: 2021-05-30