Foundations of Explainable Knowledge-Enabled Systems
arXiv - CS - Logic in Computer Science. Pub Date: 2020-03-17, DOI: arXiv:2003.07520
Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness

Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.

Updated: 2020-03-20