Demand-Driven Transparency for Monitoring Intelligent Agents
IEEE Transactions on Human-Machine Systems ( IF 3.6 ) Pub Date : 2020-06-01 , DOI: 10.1109/thms.2020.2988859
Mor Vered , Piers Howe , Tim Miller , Liz Sonenberg , Eduardo Velloso

In autonomous multiagent or multirobotic systems, the ability to respond quickly and accurately to threats and uncertainties is important for both mission outcomes and survivability. Such systems are never truly autonomous, often operating as part of a human-agent team. Intelligent agents (IAs) have been proposed as tools to help manage such teams, e.g., by proposing potential courses of action to human operators. However, they are often underutilized due to a lack of trust. Designing transparent agents, which can convey at least some information regarding their internal reasoning processes, is considered an effective method of increasing trust. How people interact with such transparency information to gain situation awareness while avoiding information overload is currently an unexplored topic. In this article, we go part way toward answering this question by investigating two forms of transparency: sequential transparency, which requires people to step through the IA's explanation in a fixed order; and demand-driven transparency, which allows people to request information as needed. In an experiment using a multivehicle simulation, our results show that demand-driven interaction improves operators' trust in the system while maintaining, and at times improving, performance and usability.

Updated: 2020-06-01