Data Snooping Bias in Tests of the Relative Performance of Multiple Forecasting Models
Journal of Banking & Finance (IF 3.539), Pub Date: 2021-03-11, DOI: 10.1016/j.jbankfin.2021.106113
Dan Gabriel Anghel

Tests of the relative performance of multiple forecasting models are sensitive to how the set of alternatives is defined. Evaluating one model against a particular set may show that it has superior predictive ability; changing the number or type of alternatives in the set may demonstrate otherwise. This paper focuses on forecasting models based on technical analysis and analyzes how much data snooping bias can arise in tests from restricting the size of forecasting model “universes” or ignoring alternatives used by practitioners and other researchers. A Monte Carlo simulation shows that false discoveries increase by an average of 0.72-2.5 percentage points each time half of the prediction models are removed from the set of relevant alternatives. A complementary empirical investigation suggests that at least 50% of positive findings reported in the literature on trading rule overperformance may be false. Our results motivate several recommendations for applied researchers that would alleviate data snooping bias in some of the more popular statistical tests used in the literature.
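The mechanism behind the abstract's claim can be illustrated with a small simulation. The sketch below is not from the paper; the rule universe, sample length, and naive one-sided t-test are illustrative assumptions. It shows how selecting the best in-sample rule from a universe of rules with no true predictive ability, then testing it without any multiplicity correction, produces a false discovery rate that grows with the size of the ignored universe:

```python
import numpy as np

rng = np.random.default_rng(0)

def false_discovery_rate(n_rules, n_trials=2000, n_obs=250, alpha_crit=1.645):
    """Share of trials in which the best of n_rules *useless* trading rules
    (zero true mean return) looks significant under a naive one-sided t-test
    that ignores the selection step -- a data-snooping false discovery."""
    hits = 0
    for _ in range(n_trials):
        # n_rules candidate rules, each with zero true mean daily return
        returns = rng.standard_normal((n_rules, n_obs))
        best = returns.mean(axis=1).argmax()   # pick the best in-sample rule
        r = returns[best]
        t_stat = r.mean() / (r.std(ddof=1) / np.sqrt(n_obs))
        if t_stat > alpha_crit:                # ~5% one-sided critical value
            hits += 1
    return hits / n_trials

# False discoveries rise sharply as the (ignored) universe grows
for n in (1, 8, 64, 512):
    print(f"universe of {n:4d} rules -> FDR {false_discovery_rate(n):.2f}")
```

Read in reverse, this is the abstract's point: shrinking the reported universe while still selecting from the full one (or ignoring alternatives tried by others) understates the multiplicity the test should correct for, inflating false positives.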




Updated: 2021-03-25