Trusting Automation: Designing for Responsivity and Resilience
Human Factors: The Journal of the Human Factors and Ergonomics Society (IF 2.9). Pub Date: 2021-04-27. DOI: 10.1177/00187208211009995
Erin K. Chiou, John D. Lee

Objective

This paper reviews recent articles related to human trust in automation to guide research and design for increasingly capable automation in complex work environments.

Background

Two recent trends—the development of increasingly capable automation and the flattening of organizational hierarchies—suggest a reframing of trust in automation is needed.

Method

Many publications related to human trust and human–automation interaction were integrated in this narrative literature review.

Results

Much research has focused on calibrating human trust to promote appropriate reliance on automation. This approach neglects relational aspects of increasingly capable automation and system-level outcomes, such as cooperation and resilience. To address these limitations, we adopt a relational framing of trust based on the decision situation, semiotics, interaction sequence, and strategy. This relational framework stresses that the goal is not to maximize trust, or even to calibrate trust, but to support a process of trusting through automation responsivity.

Conclusion

This framing clarifies why future work on trust in automation should consider not just individual characteristics and how automation influences people, but also how people can influence automation and how interdependent interactions affect trusting automation. In these new technological and organizational contexts that shift human operators to co-operators of automation, automation responsivity and the ability to resolve conflicting goals may be more relevant than reliability and reliance for advancing system design.

Application

A conceptual model comprising four concepts—situation, semiotics, strategy, and sequence—can guide future trust research and design for automation responsivity and more resilient human–automation systems.



Updated: 2021-04-29