Measuring intrarater association between correlated ordinal ratings
Biometrical Journal (IF 1.7), Pub Date: 2020-06-11, DOI: 10.1002/bimj.201900177
Kerrie P. Nelson, Thomas J. Zhou, Don Edwards

Variability between raters' ordinal scores is commonly observed in imaging tests, leading to uncertainty in the diagnostic process. In breast cancer screening, a radiologist visually interprets mammograms and MRIs, while skin diseases, Alzheimer's disease, and psychiatric conditions are graded based on clinical judgment. Consequently, studies are often conducted in clinical settings to investigate whether a new training tool can improve the interpretive performance of raters. In such studies, a large group of experts each classify a set of patients' test results on two separate occasions, before and after some form of training, with the goal of assessing the impact of training on the experts' paired ratings. However, due to the correlated nature of the ordinal ratings, few statistical approaches are available to measure association between raters' paired scores. Existing measures are restricted to assessing association at just one time point for a single screening test. We propose here a novel paired kappa to provide a summary measure of association between many raters' paired ordinal assessments of patients' test results before versus after rater training. Intrarater association also provides valuable insight into the consistency of ratings when raters view a patient's test results on two occasions with no intervention undertaken between viewings. In contrast to existing measures for correlated ratings, the proposed kappa provides an overall evaluation of the association among multiple raters' scores from two time points and is robust to the underlying disease prevalence. We implement our proposed approach in two recent breast-imaging studies and conduct extensive simulation studies to evaluate the properties and performance of our summary measure of association.
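The paper's prevalence-robust, multi-rater paired kappa is not reproduced here, but the quantity it generalizes is the familiar chance-corrected agreement index for a single rater's paired ordinal scores. Below is a minimal sketch of that conventional building block: Cohen's quadratically weighted kappa computed for each rater's before/after scores and then naively averaged across raters. The function name, the toy data, and the simple averaging step are illustrative assumptions; they do not implement the authors' proposed summary measure.

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_categories):
    """Cohen's quadratically weighted kappa for one rater's paired
    ordinal scores r1 (occasion 1) and r2 (occasion 2), coded 0..K-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    K = n_categories
    # Observed joint distribution of the paired scores
    obs = np.zeros((K, K))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    # Chance-expected distribution from the marginal proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Quadratic disagreement weights: 0 on the diagonal, 1 for extreme disagreement
    i, j = np.indices((K, K))
    w = ((i - j) / (K - 1)) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Toy example (hypothetical data): 3 raters score 40 cases on a
# 5-category ordinal scale before and after a training intervention.
rng = np.random.default_rng(0)
before = rng.integers(0, 5, size=(3, 40))
after = np.clip(before + rng.integers(-1, 2, size=(3, 40)), 0, 4)

per_rater = [quadratic_weighted_kappa(b, a, 5) for b, a in zip(before, after)]
print("per-rater weighted kappa:", np.round(per_rater, 3))
print("naive average across raters:", round(float(np.mean(per_rater)), 3))
```

Unlike this per-rater average, the proposed paired kappa summarizes association across all raters and both time points jointly and is constructed to be robust to the underlying disease prevalence, which is why the naive average above should be read only as a point of reference.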
