Explainable and Trustworthy Artificial Intelligence [Guest Editorial]
IEEE Computational Intelligence Magazine (IF 10.3). Pub Date: 2022-01-26. DOI: 10.1109/mci.2021.3129953
Jose Maria Alonso-Moral , Corrado Mencar , Hisao Ishibuchi

In the era of the Internet of Things and Big Data, data scientists are expected to extract valuable knowledge from the available data. They first analyze, curate, and pre-process the data; then they apply Artificial Intelligence (AI) techniques to extract knowledge from it automatically. Indeed, AI is recognized as a strategic technology and is already part of our everyday life. The European Commission states that “EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union’s values and fundamental rights as well as ethical principles such as accountability and transparency”. It emphasizes the importance of eXplainable AI (XAI, for short) for developing AI coherent with European values: “to further strengthen trust, people also need to understand how the technology works, hence the importance of research into the explainability of AI systems”. Moreover, in addition to the European General Data Protection Regulation (GDPR), a new European regulation on AI is in progress; it stresses once again the need for human-centric, responsible, explainable, and trustworthy AI that empowers citizens to make more informed, and thus better, decisions. Likewise, as noted in the XAI challenge posed by the US Defense Advanced Research Projects Agency (DARPA), “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”. Accordingly, humankind requires a new generation of XAI systems, which are expected to interact naturally with humans and to provide comprehensible explanations of the decisions they make automatically.
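To make the notion of "comprehensible explanations of decisions automatically made" concrete, here is a minimal sketch of an explanation-by-design decision procedure: a hand-written rule-based classifier that returns both its decision and a human-readable trace of the rule that fired. The loan-approval scenario, the feature names, and the thresholds are illustrative assumptions of ours, not examples from the editorial.

```python
# Sketch of an interpretable-by-design decision procedure: every decision
# comes paired with a plain-language justification a user can inspect.
def approve_loan(income: float, debt_ratio: float) -> tuple[str, str]:
    """Return (decision, explanation); thresholds are purely illustrative."""
    if debt_ratio > 0.5:
        return "reject", f"debt_ratio={debt_ratio:.2f} exceeds the 0.50 limit"
    if income < 20_000:
        return "reject", f"income={income:.0f} is below the 20000 minimum"
    return "approve", "debt_ratio and income are both within acceptable bounds"

decision, why = approve_loan(income=35_000, debt_ratio=0.30)
print(decision, "-", why)
# approve - debt_ratio and income are both within acceptable bounds
```

Transparent models such as rule lists or shallow decision trees generalize this pattern: the explanation is the model's own decision path, rather than a post-hoc approximation of an opaque predictor.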

Updated: 2022-01-26