Blockchain for explainable and trustworthy artificial intelligence
WIREs Data Mining and Knowledge Discovery (IF 7.8) Pub Date: 2019-10-17, DOI: 10.1002/widm.1340
Mohamed Nassar, Khaled Salah, Muhammad Habib ur Rehman, Davor Svetinovic

The increasing computational power and proliferation of big data are now empowering Artificial Intelligence (AI) to achieve massive adoption and applicability in many fields. The lack of explanation of the decisions made by today's AI algorithms is a major drawback in critical decision-making systems. For example, deep learning does not offer control over, or reasoning about, its internal processes or outputs. More importantly, current black-box AI implementations are subject to bias and adversarial attacks that may poison the learning or inference processes. Explainable AI (XAI) is an emerging class of AI algorithms that provide explanations for their decisions. In this paper, we propose a framework for achieving more trustworthy and explainable AI by leveraging features of blockchain, smart contracts, trusted oracles, and decentralized storage. We specify a framework for complex AI systems in which decision outcomes are reached through the decentralized consensus of multiple AI and XAI predictors. The paper discusses how the proposed framework can be applied in key application areas, with practical use cases.
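The core idea of reaching a decision by decentralized consensus of multiple AI and XAI predictors can be illustrated off-chain with a minimal sketch. The predictor votes, the majority rule, and the `record_on_chain` helper below are all hypothetical simplifications (the paper's framework would use actual smart contracts and oracles); here the on-chain anchoring step is stood in for by a SHA-256 digest of the agreed decision and its supporting explanations.

```python
import hashlib
from collections import Counter

def record_on_chain(payload: str) -> str:
    """Stand-in for anchoring a record on a blockchain: return the
    SHA-256 digest that a smart contract might store immutably."""
    return hashlib.sha256(payload.encode()).hexdigest()

def consensus_decision(predictions):
    """Strict-majority vote over (label, explanation) pairs produced
    by independent AI/XAI predictors; ties yield no decision."""
    counts = Counter(label for label, _ in predictions)
    (top, n), = counts.most_common(1)
    # require a strict majority before committing the outcome
    if n * 2 <= len(predictions):
        return None, None
    # keep the explanations supporting the winning label
    support = [expl for label, expl in predictions if label == top]
    return top, support

# Three hypothetical predictors vote on a loan application.
votes = [
    ("approve", "income above threshold"),
    ("approve", "low debt-to-income ratio"),
    ("reject",  "short credit history"),
]
decision, reasons = consensus_decision(votes)
digest = record_on_chain(decision + "|" + ";".join(reasons))
print(decision, len(reasons))
```

Because every predictor submits an explanation alongside its label, the recorded digest covers not only the outcome but also the reasoning that supported it, which is the property that makes the decision auditable after the fact.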
