Testing prediction accuracy in short-term ecological studies
Basic and Applied Ecology (IF 3.8), Pub Date: 2020-03-01, DOI: 10.1016/j.baae.2020.01.003
Connor M. Wood, Zachary G. Loman, Shawn T. McKinney, Cynthia S. Loftin

Abstract Applied ecology rests on the assumption that a management action will produce a predicted outcome. Testing the prediction accuracy of ecological models is the most powerful way to evaluate the knowledge implicit in this cause-effect relationship; however, predictive modeling and prediction testing are spreading only slowly in ecology. The challenge of prediction testing is particularly acute for small-scale studies, because withholding data for prediction testing (e.g., via k-fold cross-validation) can reduce model precision. Small-scale studies are nonetheless common by necessity. We use one such study, which explored small mammal abundance along an elevational gradient, to test the prediction accuracy of models with varying degrees of information content. For each of three small mammal species, we conducted 5000 iterations of the following process: (1) randomly selected 75% of the data to develop generalized linear models of species abundance that used detailed site measurements as covariates, (2) used an information-theoretic approach to compare the top model with detailed covariates against habitat-type-only and null models constructed with the same data, (3) tested those models' ability to predict the randomly withheld 25% of the data, and (4) evaluated prediction accuracy with a quadratic loss function. The detailed models fit the model-evaluation data best but had greater expected prediction error on out-of-sample data than the habitat-type models. Relationships between species and detailed site variables may be evident only within the framework of explicitly hierarchical analyses. We show that even with a small but relatively typical dataset (n = 28 sampling locations across 125 km over two years), researchers can effectively compare models with different information content and measure the models' predictive power, thereby evaluating their own ecological understanding and defining the limits of their inferences. Identifying the appropriate scope of inference through prediction testing is ecologically valuable and is attainable even with small datasets.
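
The workflow described above (a repeated 75/25 split, an information-theoretic comparison of detailed, habitat-type-only, and null generalized linear models, and a quadratic loss on the withheld data) can be summarized in a short script. The sketch below only illustrates that procedure and is not the authors' code: the covariate names (elevation, canopy_cover, habitat), the Poisson error family, and the synthetic data are assumptions made for the example.

# Minimal sketch of the repeated split-and-predict procedure described in the
# abstract (synthetic data; covariate names and the Poisson family are assumptions).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Hypothetical dataset: 28 sampling locations with detailed site covariates,
# a habitat-type factor, and a small-mammal count.
n = 28
sites = pd.DataFrame({
    "elevation":    rng.uniform(200.0, 1200.0, n),
    "canopy_cover": rng.uniform(0.0, 1.0, n),
    "habitat":      rng.choice(["conifer", "mixed", "deciduous"], n),
})
sites["count"] = rng.poisson(3, n)

def design(df, cols):
    # Intercept plus (dummy-coded) covariates for the requested model.
    parts = [pd.Series(1.0, index=df.index, name="const")]
    if cols:
        parts.append(pd.get_dummies(df[cols], drop_first=True, dtype=float))
    return pd.concat(parts, axis=1)

model_specs = {
    "detailed": ["elevation", "canopy_cover", "habitat"],  # detailed site measurements
    "habitat":  ["habitat"],                               # habitat type only
    "null":     [],                                        # intercept only
}
X_full = {name: design(sites, cols) for name, cols in model_specs.items()}
y = sites["count"]

aics = {name: [] for name in model_specs}
losses = {name: [] for name in model_specs}

for _ in range(5000):  # 5000 iterations as in the abstract; reduce for a quick test
    # (1) Randomly withhold 25% of the sites for prediction testing.
    test_idx = rng.choice(n, size=n // 4, replace=False)
    train_idx = sites.index.difference(test_idx)
    for name, X in X_full.items():
        # (2) Fit a Poisson GLM and record AIC for the information-theoretic comparison.
        fit = sm.GLM(y.loc[train_idx], X.loc[train_idx],
                     family=sm.families.Poisson()).fit()
        aics[name].append(fit.aic)
        # (3)-(4) Predict the withheld 25% and score with a quadratic loss.
        pred = fit.predict(X.loc[test_idx])
        losses[name].append(np.mean((y.loc[test_idx] - pred) ** 2))

for name in model_specs:
    print(f"{name:>8}: mean AIC = {np.mean(aics[name]):7.1f}   "
          f"mean quadratic loss = {np.mean(losses[name]):6.2f}")

A lower mean AIC indicates better in-sample support, while a lower mean quadratic loss indicates better out-of-sample prediction; the abstract's central point is that these two rankings can disagree, which is why prediction testing is worth the cost of withholding data.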
