Foundations of Explainable Knowledge-Enabled Systems
arXiv - CS - General Literature. Pub Date: 2020-03-17. arXiv:2003.07520
Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness

Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.

Updated: 2020-03-18