A historical perspective of explainable Artificial Intelligence
WIREs Data Mining and Knowledge Discovery (IF 7.8). Pub Date: 2020-10-19. DOI: 10.1002/widm.1391
Roberto Confalonieri, Ludovik Coba, Benedikt Wagner, Tarek R. Besold

Explainability in Artificial Intelligence (AI) has been revived as a topic of active research by the need to convey safety and trust to users regarding the "how" and "why" of automated decision-making in applications such as autonomous driving, medical diagnosis, or banking and finance. While explainability in AI has recently received significant attention, the origins of this line of work go back several decades, to when AI systems were mainly developed as (knowledge-based) expert systems. Since then, the definition, understanding, and implementation of explainability have been taken up in several lines of research, namely expert systems, machine learning, recommender systems, and approaches to neural-symbolic learning and reasoning, mostly during different periods of AI history. In this article, we present a historical perspective of Explainable Artificial Intelligence. We discuss how explainability was mainly conceived in the past, how it is understood in the present, and how it might be understood in the future. We conclude the article by proposing criteria for explanations that we believe will play a crucial role in the development of human-understandable explainable systems.

Updated: 2020-12-17