Semantics of the Black-Box: Can Knowledge Graphs Help Make Deep Learning Systems More Interpretable and Explainable?
IEEE Internet Computing ( IF 3.7 ) Pub Date : 2021-02-18 , DOI: 10.1109/mic.2020.3031769
Manas Gaur, Keyur Faldu, Amit Sheth
The recent series of innovations in deep learning (DL) have shown enormous potential to impact individuals and society, both positively and negatively. DL models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, and human-computer interaction. However, DL's black-box nature and over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for interpretability and explainability. Furthermore, DL models have not demonstrated the ability to effectively utilize relevant domain knowledge, which is critical to human understanding. This capability was missing from early data-focused approaches, motivating knowledge-infused learning (K-iL) as a way to incorporate knowledge computationally. This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL using K-iL. Through examples from natural language processing applications in healthcare and education, we discuss the utility of K-iL toward interpretability and explainability.
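To make the idea concrete, here is a minimal, purely illustrative sketch of shallow knowledge infusion: a sentence embedding is augmented with the embeddings of knowledge-graph entities linked in the text, and the linked entities double as a human-readable trace for explanation. This is not the authors' implementation; all names, vectors, and the toy KG below are hypothetical.

```python
# Illustrative sketch of shallow knowledge infusion (hypothetical values).
# Toy knowledge graph: entity -> embedding. In practice these would be
# learned over a real KG (e.g., a biomedical or educational ontology).
kg_embeddings = {
    "depression": [0.9, 0.1, 0.3],
    "insomnia":   [0.7, 0.2, 0.5],
}

def link_entities(text, kg):
    """Naive entity linking: surface-form match against KG entity names."""
    return [entity for entity in kg if entity in text.lower()]

def infuse(text_embedding, text, kg):
    """Concatenate a text embedding with the mean embedding of linked
    KG entities (a zero vector if no entity is linked). Returns the
    fused vector and the linked entities as an explanation trace."""
    entities = link_entities(text, kg)
    dim = len(next(iter(kg.values())))
    if entities:
        vecs = [kg[e] for e in entities]
        kg_vec = [sum(component) / len(vecs) for component in zip(*vecs)]
    else:
        kg_vec = [0.0] * dim
    return text_embedding + kg_vec, entities

sentence = "Patient reports insomnia and symptoms of depression."
text_vec = [0.4, 0.8]  # stand-in for a learned sentence embedding
fused, linked = infuse(text_vec, sentence, kg_embeddings)
print(linked)  # the grounded entities make the model's input inspectable
print(fused)   # fused representation fed to downstream DL layers
```

The explanation benefit comes from the `linked` trace: unlike an opaque dense vector alone, the fused representation is anchored to named KG entities a domain expert can inspect.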

Updated: 2021-02-19