Explainable artificial intelligence enhances the ecological interpretability of black‐box species distribution models
Ecography (IF 5.9) Pub Date: 2020-11-17, DOI: 10.1111/ecog.05360
Masahiro Ryo 1,2,3, Boyan Angelov 4, Stefano Mammola 5,6, Jamie M. Kass 7, Blas M. Benito 8, Florian Hartig 9

Species distribution models (SDMs) are widely used in ecology, biogeography and conservation biology to estimate relationships between environmental variables and species occurrence data and make predictions of how their distributions vary in space and time. During the past two decades, the field has increasingly made use of machine learning approaches for constructing and validating SDMs. Model accuracy has steadily increased as a result, but the interpretability of the fitted models, for example the relative importance of predictor variables or their causal effects on focal species, has not always kept pace. Here we draw attention to an emerging subdiscipline of artificial intelligence, explainable AI (xAI), as a toolbox for better interpreting SDMs. xAI aims at deciphering the behavior of complex statistical or machine learning models (e.g. neural networks, random forests, boosted regression trees), and can produce more transparent and understandable SDM predictions. We describe the rationale behind xAI and provide a list of tools that can be used to help ecological modelers better understand complex model behavior at different scales. As an example, we perform a reproducible SDM analysis in R on the African elephant and showcase some xAI tools such as local interpretable model‐agnostic explanation (LIME) to help interpret local‐scale behavior of the model. We conclude with what we see as the benefits and caveats of these techniques and advocate for their use to improve the interpretability of machine learning SDMs.
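The workflow the abstract describes, fitting a black-box machine learning SDM and then probing individual predictions with LIME, can be sketched in a few lines of R. The sketch below is a minimal, hypothetical illustration using simulated presence/absence data and the caret and lime packages; the predictor names and data are invented for demonstration and are not the paper's African elephant analysis.

library(caret)
library(lime)

set.seed(42)

## Simulated environmental predictors for 500 locations
n <- 500
env <- data.frame(
  temperature   = rnorm(n, mean = 25, sd = 5),
  precipitation = rnorm(n, mean = 800, sd = 200),
  tree_cover    = runif(n, min = 0, max = 100)
)

## Simulated occurrence: presence more likely at higher tree cover
occurrence <- factor(ifelse(env$tree_cover + rnorm(n, 0, 15) > 50,
                            "presence", "absence"))

## "Black-box" SDM: a random forest fitted through caret,
## one of the model types lime supports out of the box
sdm <- train(env, occurrence, method = "rf")

## Build a LIME explainer from the training data and the fitted model
explainer <- lime(env, sdm)

## Explain predictions for three individual locations: LIME fits a
## simple interpretable surrogate model in the neighborhood of each
## point and reports how much each predictor contributed there
explanation <- explain(env[1:3, ], explainer, n_labels = 1, n_features = 3)

plot_features(explanation)

Each row of the resulting explanation gives a local, per-location estimate of how strongly each predictor pushed that site's predicted occurrence up or down, which is the "local-scale behavior of the model" the abstract refers to.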

Updated: 2020-11-17