Explainable Goal-driven Agents and Robots - A Comprehensive Review
ACM Computing Surveys (IF 16.6), Pub Date: 2023-02-02, DOI: 10.1145/3564240
Fatai Sado, Chu Kiong Loo, Wei Shiung Liew, Matthias Kerzel, Stefan Wermter
Recent applications of autonomous agents and robots have drawn attention to crucial trust-related challenges in the current generation of artificial intelligence (AI) systems. AI systems based on connectionist deep neural networks, despite their great successes, lack the capability to explain their decisions and actions to others. Without symbolic interpretation capabilities they are ‘black boxes’: their choices and actions are opaque, which makes them difficult to trust in safety-critical applications. The recent push for explainability has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most studies have focused on data-driven XAI systems in the computational sciences, and studies addressing the increasingly pervasive goal-driven agents and robots remain sparse. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents’ perceptual functions (e.g., senses, vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap toward the realization of effective goal-driven explainable agents and robots.
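
To make concrete the kind of goal-driven explainability the abstract describes, the following is a minimal Python sketch (not taken from the paper; all class names, fields, and the explanation template are illustrative assumptions) of a BDI-style agent that selects an intention from its beliefs and desires and then verbalizes why it chose that action:

from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Plan:
    goal: str                 # the desire this plan serves
    action: str               # the action the agent will take
    preconditions: list[str]  # beliefs that must hold for the plan to apply


@dataclass
class BDIAgent:
    beliefs: set[str] = field(default_factory=set)
    desires: list[str] = field(default_factory=list)
    plans: list[Plan] = field(default_factory=list)

    def deliberate(self) -> Plan | None:
        """Pick the first applicable plan for the highest-priority desire."""
        for goal in self.desires:
            for plan in self.plans:
                if plan.goal == goal and all(p in self.beliefs for p in plan.preconditions):
                    return plan
        return None

    def explain(self, plan: Plan) -> str:
        """Generate a human-readable justification directly from the agent's
        own beliefs, desires, and the selected intention."""
        return (
            f"I chose to '{plan.action}' because my goal is '{plan.goal}' "
            f"and I believe that {', '.join(plan.preconditions)}."
        )


if __name__ == "__main__":
    agent = BDIAgent(
        beliefs={"door is open", "charging dock is free"},
        desires=["recharge battery"],
        plans=[Plan("recharge battery", "navigate to dock",
                    ["door is open", "charging dock is free"])],
    )
    intention = agent.deliberate()
    if intention is not None:
        print(agent.explain(intention))
    # -> I chose to 'navigate to dock' because my goal is 'recharge battery'
    #    and I believe that door is open, charging dock is free.

Because the explanation is generated from the same symbolic state (beliefs, desires, intentions) that drives deliberation, it is faithful to the agent's actual decision process, in contrast to post-hoc explanations of a black-box model.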



Updated: 2023-02-02