Explainability and Trust: A Promising Future [Editor's Remarks]
IEEE Computational Intelligence Magazine (IF 10.3) Pub Date: 2022-01-26, DOI: 10.1109/mci.2021.3129948
Chuan-Kang Ting

Explainability is one of the most discussed AI topics in recent years, and with good reason. The complexity of AI-powered systems has grown to the point that humans can no longer understand how these systems reach their decisions, and this opacity is an obstacle to AI adoption. Explanations of AI's decision-making processes can provide the trust we need. When an AI system is asked to show its work, the demanded explainability, which usually includes transparency, creates accountability for the system's developers and urges them to reappraise their models; improved AI-based systems are typically the result of such reappraisal. In addition, an AI system that can explain its inference processes or results makes users more inclined to trust its recommendations, leading to broader use and application. Explainability indeed paints a promising future for AI, one in which both developers and users benefit from explainable and trustworthy AI-based systems.
