Computing inter-rater reliability and its variance in the presence of high agreement.
British Journal of Mathematical and Statistical Psychology (IF 2.6). Pub Date: 2008-05-17. DOI: 10.1348/000711006x126600
Kilem Li Gwet

Pi (π) and kappa (κ) statistics are widely used in the areas of psychiatry and psychological testing to compute the extent of agreement between raters on nominally scaled data. It is a fact that these coefficients occasionally yield unexpected results in situations known as the paradoxes of kappa. This paper explores the origin of these limitations, and introduces an alternative and more stable agreement coefficient referred to as the AC1 coefficient. Also proposed are new variance estimators for the multiple-rater generalized pi and AC1 statistics, whose validity does not depend upon the hypothesis of independence between raters. This is an improvement over existing alternative variances, which depend on the independence assumption. A Monte-Carlo simulation study demonstrates the validity of these variance estimators for confidence interval construction, and confirms the value of AC1 as an improved alternative to existing inter-rater reliability statistics.
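To make the contrast concrete, the sketch below computes observed agreement together with Scott's pi, Cohen's kappa and Gwet's AC1 for two raters, using the standard two-rater chance-agreement formulas from the agreement literature; the function name, the toy data and the exact formulas used here are illustrative assumptions and are not quoted from the paper itself. On a deliberately skewed example with 92% observed agreement, pi and kappa drop to roughly 0.29 while AC1 stays near 0.91, which is the kind of paradox the abstract refers to.

```python
import numpy as np

def agreement_coefficients(ratings_a, ratings_b):
    """Observed agreement, Scott's pi, Cohen's kappa and Gwet's AC1
    for two raters scoring the same subjects on a nominal scale."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    categories = np.unique(np.concatenate([a, b]))
    q = len(categories)

    # Observed agreement: proportion of subjects classified identically.
    p_o = np.mean(a == b)

    # Marginal classification proportions per rater, and their average.
    p1 = np.array([np.mean(a == k) for k in categories])
    p2 = np.array([np.mean(b == k) for k in categories])
    pi_k = (p1 + p2) / 2.0

    # The chance-agreement term is what distinguishes the coefficients.
    pe_pi = np.sum(pi_k ** 2)                       # Scott's pi
    pe_kappa = np.sum(p1 * p2)                      # Cohen's kappa
    pe_ac1 = np.sum(pi_k * (1.0 - pi_k)) / (q - 1)  # Gwet's AC1

    chance_corrected = lambda p_e: (p_o - p_e) / (1.0 - p_e)
    return {
        "observed_agreement": p_o,
        "scott_pi": chance_corrected(pe_pi),
        "cohen_kappa": chance_corrected(pe_kappa),
        "gwet_ac1": chance_corrected(pe_ac1),
    }

# Skewed toy data: 50 subjects, 46 agreements (92%), but one category dominates.
rater_a = [1] * 47 + [0] * 3
rater_b = [1] * 45 + [0] * 2 + [1] * 2 + [0]
print(agreement_coefficients(rater_a, rater_b))
# Roughly: observed 0.92, pi ~ 0.29, kappa ~ 0.29, AC1 ~ 0.91
```

The multiple-rater generalizations and the variance estimators that do not rely on rater independence, which are the main methodological contributions of the paper, are not reproduced in this sketch.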
