Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes.
Psychological Methods (IF 10.929), Pub Date: 2020-07-16, DOI: 10.1037/met0000300
Melissa A. Rodgers, James E. Pustejovsky
Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such techniques are univariate, in that they assume that each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as for multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting, including the trim and fill test, Egger's regression test, and a likelihood ratio test from a three-parameter selection model (3PSM), when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger's regression test that incorporate robust variance estimation (RVE) or multilevel meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. Variants of Egger's regression maintain Type I error rates when dependent effect sizes are sampled or handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. With the exception of the 3PSM, all methods have limited power to detect selection bias except under strong selection for statistically significant effects. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
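To make the univariate setting concrete, the following is a minimal sketch of Egger's regression test, one of the methods evaluated above: effect size estimates are regressed on their standard errors with inverse-variance weights, and a nonzero slope on the standard error indicates small-study effects. This is an illustrative implementation under simulated null data, not the authors' simulation code; the function and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test: weighted least squares of effect size
    estimates on their standard errors (weights 1/SE^2). Returns the
    slope on SE and a two-sided p-value; a nonzero slope suggests
    small-study effects."""
    w = 1.0 / ses**2
    X = np.column_stack([np.ones_like(ses), ses])  # intercept + SE
    XtW = X.T * w                                  # X'W
    cov = np.linalg.inv(XtW @ X)                   # (X'WX)^{-1}
    beta = cov @ (XtW @ effects)                   # WLS coefficients
    z = beta[1] / np.sqrt(cov[1, 1])               # test the SE slope
    p = 2 * stats.norm.sf(abs(z))
    return beta[1], p

# Simulated null scenario: one independent effect per study,
# true effect zero, so no small-study effects are present.
rng = np.random.default_rng(0)
k = 40
ses = rng.uniform(0.05, 0.5, k)
effects = rng.normal(0.0, ses)
slope, p = egger_test(effects, ses)
print(f"slope = {slope:.3f}, p = {p:.3f}")
```

Note that this sketch assumes each study contributes a single independent estimate; as the abstract emphasizes, applying such a test to dependent effect sizes without an RVE or multilevel correction inflates the Type I error rate.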

Updated: 2020-07-16