Individual Differences in Trust in Autonomous Robots: Implications for Transparency
IEEE Transactions on Human-Machine Systems ( IF 3.6 ) Pub Date : 2020-06-01 , DOI: 10.1109/thms.2019.2947592
Gerald Matthews , Jinchao Lin , April Rose Panganiban , Michael D. Long

The introduction of increasingly intelligent and autonomous systems raises novel human factors challenges for human–machine teaming. People utilize differing mental models in understanding the functioning of complex systems that may be capable of social agency. Operators may perceive the machine as either a complex tool or a humanlike teammate. When the “advanced tool” mental model is adopted, operator trust may reflect individual differences in expectations of automation. By contrast, when the “teammate” mental model is activated, trust may depend on evaluative attitudes to robots. This article investigates predictors of trust in an autonomous robot detecting threat on either a physics-based or psychological basis. Distinct dimensions of physics-based and psychological trust are identified, corresponding to advanced tool and team mental models, respectively. Dispositional perceptions of automation, measured with the perfect automation schema scale, are associated with both aspects of trust. By contrast, the negative attitudes toward robots scale is specifically associated with lower psychological trust. The findings suggest that transparency information should be designed for compatibility with the operator's mental model in order to support accurate trust calibration and situation awareness. Transparency may be personalized to emphasize either the machine's data-analytic capabilities (advanced tool) or its humanlike social functioning (teammate).

Updated: 2020-06-01