Moving beyond generalization to accurate interpretation of flexible models
Nature Machine Intelligence (IF 18.8). Pub Date: 2020-10-26. DOI: 10.1038/s42256-020-00242-6
Mikhail Genkin, Tatiana A. Engel

Machine learning optimizes flexible models to predict data. In scientific applications, there is a rising interest in interpreting these flexible models to derive hypotheses from data. However, it is unknown whether good data prediction guarantees the accurate interpretation of flexible models. Here, we test this connection using a flexible, yet intrinsically interpretable framework for modelling neural dynamics. We find that many models discovered during optimization predict data equally well, yet they fail to match the correct hypothesis. We develop an alternative approach that identifies models with correct interpretation by comparing model features across data samples to separate true features from noise. We illustrate our findings using recordings of spiking activity from the visual cortex of monkeys performing a fixation task. Our results reveal that good predictions cannot substitute for accurate interpretation of flexible models and offer a principled approach to identify models with correct interpretation.
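The selection idea described in the abstract, comparing model features across data samples to separate true features from noise, can be sketched generically. The snippet below is a minimal illustration under assumptions introduced here, not the authors' actual method: the `fit_model` callable, the trial format, and Pearson correlation as the agreement measure are all hypothetical.

```python
import numpy as np

def cross_sample_consistency(fit_model, trials, n_splits=20, seed=0):
    """Score how reproducible a fitted model's features are across
    independent halves of the data.

    fit_model -- hypothetical callable mapping a list of trials to a
                 1-D feature vector (for example, a discretized latent
                 potential describing neural dynamics).
    trials    -- sequence of data samples (for example, per-trial
                 spike trains).

    Returns the mean and standard deviation of the Pearson correlation
    between feature vectors fitted on disjoint halves of the trials.
    Features that reflect the data-generating process should agree
    across halves; features that fit noise should not.
    """
    rng = np.random.default_rng(seed)
    n = len(trials)
    corrs = []
    for _ in range(n_splits):
        # Randomly split trials into two disjoint halves.
        perm = rng.permutation(n)
        feats_a = fit_model([trials[i] for i in perm[: n // 2]])
        feats_b = fit_model([trials[i] for i in perm[n // 2:]])
        # Agreement of features fitted on independent data halves.
        corrs.append(np.corrcoef(feats_a, feats_b)[0, 1])
    return float(np.mean(corrs)), float(np.std(corrs))
```

Candidate models encountered along the optimization path could then be ranked by such a consistency score in addition to held-out predictive performance, reflecting the paper's distinction between good prediction and accurate interpretation.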

A preprint version of the article is available at bioRxiv.


Updated: 2020-10-28