A Numerical Transform of Random Forest Regressors corrects Systematically-Biased Predictions
arXiv - CS - Machine Learning. Pub Date: 2020-03-16. DOI: arxiv-2003.07445
Shipra Malhotra and John Karanicolas

Over the past decade, random forest models have become widely used as a robust method for high-dimensional data regression tasks. Their popularity arises in part from the fact that they require little hyperparameter tuning and are not very susceptible to overfitting. A random forest regression model is composed of an ensemble of decision trees that independently predict the value of a (continuous) dependent variable; the predictions from the individual trees are averaged to yield the forest's overall predicted value. Using a suite of representative real-world datasets, we find a systematic bias in predictions from random forest models. We find that this bias is recapitulated in simple synthetic datasets, regardless of whether they include irreducible error (noise), but that models employing boosting do not exhibit this bias. Here we demonstrate the basis for this problem, and we use the training data to define a numerical transformation that fully corrects it. Applying this transformation yields improved predictions on every one of the real-world and synthetic datasets evaluated in our study.
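A well-known manifestation of this kind of bias in averaging ensembles is compression toward the training mean: low true values tend to be over-predicted and high true values under-predicted. The sketch below illustrates the general idea in pure Python, simulating the shrinkage directly and fitting a simple linear recalibration on the training set as a stand-in for the paper's transform. The shrinkage factor `alpha`, the noise level, and the linear form of the correction are all assumptions for illustration, not the authors' actual method.

```python
import random
import statistics

random.seed(0)

# Toy setup: simulate a forest whose averaged prediction is systematically
# compressed toward the training mean:
#   pred = mean + alpha * (y_true - mean) + noise,  with alpha < 1.
train_y = [random.uniform(0.0, 10.0) for _ in range(500)]
mean_y = statistics.fmean(train_y)
alpha = 0.7  # assumed shrinkage factor (illustration only)

def forest_predict(y_true):
    """Average of 50 simulated 'tree' predictions, each shrunk toward the mean."""
    trees = [mean_y + alpha * (y_true - mean_y) + random.gauss(0.0, 0.5)
             for _ in range(50)]
    return statistics.fmean(trees)

# Predictions on the training set expose the bias, and supply the data
# needed to fit a correction: a least-squares line y ~ a + b * pred.
train_pred = [forest_predict(y) for y in train_y]
mp = statistics.fmean(train_pred)
b = (sum((p - mp) * (y - mean_y) for p, y in zip(train_pred, train_y))
     / sum((p - mp) ** 2 for p in train_pred))
a = mean_y - b * mp

def corrected_predict(y_true):
    """Forest prediction passed through the training-data-derived transform."""
    return a + b * forest_predict(y_true)
```

With shrinkage toward the mean, the fitted slope `b` comes out close to `1 / alpha`, so the correction re-expands the compressed predictions: a raw prediction for a true value of 1.0 lands near 2.2, while the corrected prediction lands near 1.0.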

Updated: 2020-03-18