Directions for Explainable Knowledge-Enabled Systems
arXiv - CS - Machine Learning. Pub Date: 2020-03-17, DOI: arxiv-2003.07523
Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness

Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have grown more complex, and often more opaque, through the incorporation of sophisticated machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, seeking explanations that address trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we draw on our survey of the explanation literature in Artificial Intelligence and closely related fields to generate a set of explanation types that we feel reflect the expanded explanation needs of today's Artificial Intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers generate and prioritize requirements, and will further help produce explanations better aligned with users' and situational needs.
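As a minimal sketch of how a system designer might act on such a catalog, the snippet below encodes a few explanation types alongside the motivating user questions the abstract describes, and ranks them against stated user needs. The type names here (contrastive, counterfactual, trace-based) are common categories in the explainable-AI literature, not necessarily the chapter's own list, and the questions, the `ExplanationType` structure, and the `prioritize` helper are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExplanationType:
    """One explanation type a designer might support, paired with a
    motivating user question. Names are illustrative, not the
    chapter's definitive set."""
    name: str
    definition: str
    example_question: str

# Hypothetical catalog drawn from common explainable-AI categories.
CATALOG = [
    ExplanationType(
        "contrastive",
        "Why this output rather than a specific alternative.",
        "Why was treatment A recommended instead of treatment B?",
    ),
    ExplanationType(
        "counterfactual",
        "What input change would have altered the output.",
        "What would need to change for this loan to be approved?",
    ),
    ExplanationType(
        "trace-based",
        "The reasoning steps or provenance behind the output.",
        "Which rules and data sources led to this diagnosis?",
    ),
]

def prioritize(types, user_needs):
    """Rank explanation types by how many stated user needs mention
    them -- a toy stand-in for requirements prioritization."""
    return sorted(
        types,
        key=lambda t: sum(t.name in need.lower() for need in user_needs),
        reverse=True,
    )

if __name__ == "__main__":
    needs = [
        "clinicians ask contrastive 'why not B?' questions",
        "auditors require trace-based provenance",
    ]
    for t in prioritize(CATALOG, needs):
        print(f"{t.name}: {t.example_question}")
```

A real requirements exercise would, of course, match needs to types by richer criteria than string overlap; the point is only that pairing each type with a definition and an example question gives designers a concrete artifact to prioritize against.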

Updated: 2020-03-18