Strategies to detect invalid performance in cognitive testing: An updated and extended meta-analysis
Current Psychology (IF 2.387) Pub Date: 2021-04-01, DOI: 10.1007/s12144-021-01659-x
Iulia Crişan, Laurenţiu-Paul Maricuţoiu, Florin-Alin Sava

This review updates previous meta-analytical findings on validity indicators and provides new evidence on moderators of invalid performance by investigating differences between noncredible and credible performances of clinical and non-clinical participants. Data from 133 studies (50 from previous meta-analyses and 83 new articles) were extracted and analyzed with respect to research design, coaching, stimuli, and detection strategies. Overall effects were largest for experimental studies with non-clinical simulators vs community controls (Mean d = 1.648, 95% CI = 1.46–1.835, k = 41) and clinical simulators vs clinical controls (Mean d = 1.728, 95% CI = 1.224–2.232, k = 6), followed by known-groups comparisons (Mean d = 1.06, 95% CI = .955–1.166, k = 50) and experimental studies with community simulators vs patients (Mean d = .877, 95% CI = .751–1.004, k = 53). Consistent with previous findings, symptom-coaching proved more effective than test-coaching in reducing differences between non-clinical simulators and clinical patients. Extending previous reviews, the analysis of stimulus material showed the largest effects, and the greatest resistance to coaching, for tasks using numbers and letters & symbols. The analysis of detection strategies across types of contrasts, instruments, and coaching yielded the largest effects for Recognition. Effects were moderate for Magnitude of error, Performance curve, and Recall, and lower and more variable for Reaction time, Floor effect, and Consistency, with stand-alone indicators generally yielding larger differences than embedded indices. Methodological and practical implications are discussed regarding the testing of validity indicators in research and their combined use in assessment.
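The abstract reports pooled standardized mean differences (Cohen's d) with 95% confidence intervals over k studies per contrast. As a rough illustration of how such pooled estimates are typically obtained (not necessarily the authors' exact procedure), the sketch below applies a DerSimonian-Laird random-effects model to hypothetical per-study d values and sampling variances; all inputs are made up for the example.

```python
import math

def pool_effects_dl(ds, vs):
    """Pool per-study Cohen's d values with a DerSimonian-Laird random-effects model.

    ds: per-study standardized mean differences (Cohen's d)
    vs: their sampling variances
    Returns (pooled d, 95% CI lower bound, 95% CI upper bound).
    """
    k = len(ds)
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w_fe = [1.0 / v for v in vs]
    d_fe = sum(w * d for w, d in zip(w_fe, ds)) / sum(w_fe)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(w * (d - d_fe) ** 2 for w, d in zip(w_fe, ds))
    c = sum(w_fe) - sum(w ** 2 for w in w_fe) / sum(w_fe)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights, pooled estimate, and 95% CI
    w_re = [1.0 / (v + tau2) for v in vs]
    d_re = sum(w * d for w, d in zip(w_re, ds)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return d_re, d_re - 1.96 * se, d_re + 1.96 * se

# Hypothetical per-study values, purely for illustration
ds = [1.2, 1.8, 1.5, 2.0, 1.4]
vs = [0.05, 0.08, 0.06, 0.10, 0.07]
print(pool_effects_dl(ds, vs))
```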



Updated: 2021-04-02