A translucent box: interpretable machine learning in ecology
Ecological Monographs (IF 7.1), Pub Date: 2020-06-30, DOI: 10.1002/ecm.1422
Tim C. D. Lucas

Machine learning has become popular in ecology but its use has remained restricted to predicting, rather than understanding, the natural world. Many researchers consider machine learning algorithms to be a black box. These models can, however, with careful examination, be used to inform our understanding of the world. They are translucent boxes. Furthermore, the interpretation of these models can be an important step in building confidence in a model or in a specific prediction from a model. Here I review a number of techniques for interpreting machine learning models at the level of the system, the variable, and the individual prediction as well as methods for handling non‐independent data. I also discuss the limits of interpretability for different methods and demonstrate these approaches using a case example of understanding litter sizes in mammals.
