Making deep neural networks right for the right scientific reasons by interacting with their explanations
Nature Machine Intelligence (IF 18.8), Pub Date: 2020-08-12, DOI: 10.1038/s42256-020-0212-3
Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting

Deep neural networks have demonstrated excellent performance in many real-world applications. Unfortunately, they may show Clever Hans-like behaviour (exploiting confounding factors within datasets) to achieve high performance. In this work we introduce the novel learning setting of explanatory interactive learning and illustrate its benefits on a plant phenotyping research task. Explanatory interactive learning brings the scientist into the training loop, where they interactively revise the original model by providing feedback on its explanations. Our experimental results demonstrate that explanatory interactive learning can help to avoid Clever Hans moments in machine learning and encourages (or discourages, if appropriate) trust in the underlying model.
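For differentiable models, one way such feedback can be operationalized is an augmented loss that penalizes explanation mass on input regions the scientist flags as confounders, in the spirit of the "right for the right reasons" penalty this line of work builds on. The following is a minimal, illustrative PyTorch sketch; the function name, mask convention, and weight `lam` are assumptions for exposition, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def xil_rrr_loss(model, x, y, irrelevant_mask, lam=10.0):
    """Cross-entropy plus a 'right for the right reasons' style penalty
    on input gradients that fall inside regions the scientist marked as
    confounding (irrelevant_mask == 1 there, 0 elsewhere)."""
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x), dim=1)
    ce = F.nll_loss(log_probs, y)

    # Input gradients of the summed log-probabilities act as a simple
    # saliency-style explanation of the prediction.
    grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)

    # Penalise explanation mass on the user-flagged confounders, pushing
    # the model toward being right for the right reasons.
    penalty = (irrelevant_mask * grads).pow(2).sum()
    return ce + lam * penalty
```

In a training loop, a loss of this shape would replace plain cross-entropy on examples whose explanations the scientist has annotated, while unannotated examples fall back to the standard term.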

A preprint version of the article is available on arXiv.


Updated: 2020-08-14