"Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis": Correction to Oberlader et al. (2016).
Law and Human Behavior (IF 2.4) Pub Date: 2019-03-19, DOI: 10.1037/lhb0000324


Reports an error in "Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis" by Verena A. Oberlader, Christoph Naefgen, Judith Koppehele-Gossel, Laura Quinten, Rainer Banse and Alexander F. Schmidt (Law and Human Behavior, 2016[Aug], Vol 40[4], 440-457). During an update of this meta-analysis it became apparent that one study was erroneously entered twice. The reduced dataset of k = 55 studies was reanalyzed after excluding the unpublished study by Scheinberger (1993). The corrected overall effect size changed only at the second decimal: d = 1.01 (95% CI [0.77, 1.25], Q = 409.73, p < .001, I² = 92.21%) and g = 0.98 (95% CI [0.75, 1.22], Q = 395.49, p < .001, I² = 91.71%), k = 55, N = 3,399. This small numerical deviation is negligible and does not change the interpretation of the results. Similarly, results for categorical moderators changed only numerically, not in terms of their statistical significance or direction (see revised Table 4). In the original meta-analysis based on k = 56 studies, unpublished studies had a larger effect size than published studies; based on k = 55 studies, this difference vanished. Results for continuous moderators also changed only numerically: Q-tests with mixed-effects models still revealed that year of publication (Q = 0.06, p = .807, k = 55) as well as gender ratio in the sample (Q = 1.28, p = .259, k = 43) had no statistically significant influence on effect size. In sum, based on the numerically corrected values, our implications for practical advice and boundary conditions for the use of content-based techniques in credibility assessment remain valid. The online version of this article has been corrected.

(The following abstract of the original article appeared in record 2016-21973-001.) Within the scope of judicial decisions, approaches to distinguish between true and fabricated statements have been of particular importance since ancient times.
Although methods focusing on "prototypical" deceptive behavior (e.g., psychophysiological phenomena, nonverbal cues) have largely been rejected with regard to validity, content-based techniques constitute a promising approach and are well established within the applied forensic context. The basic idea of this approach is that experience-based and nonexperience-based statements differ in their content-related quality. In order to test the validity of the most prominent content-based techniques, criteria-based content analysis (CBCA) and reality monitoring (RM), we conducted a comprehensive meta-analysis of English- and German-language studies. Based on a variety of decision criteria, 55 studies were included, revealing an overall effect size of g = 0.98 (95% confidence interval [0.75, 1.22], Q = 395.49, p < .001, I² = 91.71%, N = 3,399). There was no significant difference in the effectiveness of CBCA and RM. Additionally, we investigated a number of moderator variables, such as characteristics of participants, statements, and judgment procedures, as well as general study characteristics. Results showed that the application of all CBCA criteria outperformed any incomplete CBCA criteria set. Furthermore, statement classification based on discriminant functions revealed higher discrimination rates than decisions based on sum scores. All results are discussed in terms of their significance for future research (e.g., developing standardized decision rules) and practical application (e.g., user training, applying the complete criteria set). (PsycINFO Database Record (c) 2019 APA, all rights reserved).
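The pooled effect size g with its heterogeneity statistics (Cochran's Q and Higgins' I²) reported above comes from a random-effects meta-analysis. As a rough illustration only, here is a minimal DerSimonian-Laird sketch in Python on invented toy data; it is not the authors' computation, and the article's exact I² values may stem from a different estimator.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis (DerSimonian-Laird method).

    effects:   per-study effect sizes (e.g., Hedges' g)
    variances: per-study sampling variances
    Returns (pooled effect, 95% CI, Cochran's Q, I^2 in %).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw  # fixed-effect mean
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    df = k - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # Higgins' I^2 (%)
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    sw_re = sum(w_re)
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sw_re
    se = math.sqrt(1.0 / sw_re)
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, ci, q, i2

# Toy data: three hypothetical studies (Hedges' g, sampling variance).
# With this homogeneous toy data Q falls below df, so tau^2 and I^2 are 0;
# the article's k = 55 real studies instead show very high heterogeneity.
g, ci, q, i2 = dersimonian_laird([0.8, 1.1, 1.0], [0.04, 0.05, 0.03])
```

A high I² (here, over 90% in the article) indicates that most of the observed variability in effect sizes reflects genuine between-study differences rather than sampling error, which is why the authors probe moderators such as the CBCA criteria set used.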

Updated: 2019-11-01