Using generalizability theory and the ERP Reliability Analysis (ERA) Toolbox for assessing test-retest reliability of ERP scores part 1: Algorithms, framework, and implementation
International Journal of Psychophysiology (IF 2.5), Pub Date: 2021-01-16, DOI: 10.1016/j.ijpsycho.2021.01.006
Peter E. Clayson, Kaylie A. Carbine, Scott A. Baldwin, Joseph A. Olsen, Michael J. Larson

The reliability of event-related brain potential (ERP) scores depends on study context and how those scores will be used, and reliability must be routinely evaluated. Many factors can influence ERP score reliability; generalizability (G) theory provides a multifaceted approach to estimating the internal consistency and temporal stability of scores that is well suited for ERPs. G-theory's approach possesses a number of advantages over classical test theory that make it ideal for pinpointing sources of error in scores. The current primer outlines the G-theory approach to estimating internal consistency (coefficients of equivalence) and test-retest reliability (coefficients of stability). This approach is used to evaluate the reliability of ERP measurements. The primer outlines how to estimate reliability coefficients that consider the impact of the number of trials, events, occasions, and groups. The uses of two different G-theory reliability coefficients (i.e., generalizability and dependability) in ERP research are elaborated, and a dataset from the companion manuscript, which examines N2 amplitudes to Go/NoGo stimuli, is used as an example of the application of these coefficients to ERPs. The developed algorithms are implemented in the ERP Reliability Analysis (ERA) Toolbox, which is open-source software designed for estimating score reliability using G theory. The toolbox facilitates the application of G theory in an effort to simplify the study-by-study evaluation of ERP score reliability. The formulas provided in this primer should enable researchers to pinpoint the sources of measurement error in ERP scores from multiple recording sessions and subsequently plan studies that optimize score reliability.
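For orientation, the distinction between the two coefficient types can be illustrated with textbook single-facet (persons × trials) G-theory formulas for a score averaged over a given number of trials. The sketch below is a hypothetical Python illustration, not the ERA Toolbox itself (which is MATLAB-based and uses its own estimation machinery) and not necessarily the exact parameterization of the companion manuscript; the variance-component values are made up for demonstration.

```python
# Illustrative sketch only: standard single-facet (persons x trials) G-theory
# coefficients for an ERP score averaged over n_trials trials. The variance
# components below are hypothetical; in practice they would be estimated from
# trial-level ERP data (e.g., with the ERA Toolbox).

def generalizability_coefficient(var_person, var_person_x_trial, n_trials):
    """Coefficient for relative decisions (rank-ordering participants)."""
    relative_error = var_person_x_trial / n_trials
    return var_person / (var_person + relative_error)

def dependability_coefficient(var_person, var_trial, var_person_x_trial, n_trials):
    """Coefficient for absolute decisions; the trial main effect also counts as error."""
    absolute_error = (var_trial + var_person_x_trial) / n_trials
    return var_person / (var_person + absolute_error)

# Hypothetical variance components (arbitrary units, e.g., microvolts squared)
var_person = 4.0           # between-person variance in N2 amplitude
var_trial = 0.5            # trial main-effect variance
var_person_x_trial = 20.0  # person-by-trial interaction plus residual error

for n in (8, 16, 32, 64):
    g = generalizability_coefficient(var_person, var_person_x_trial, n)
    d = dependability_coefficient(var_person, var_trial, var_person_x_trial, n)
    print(f"n_trials={n:3d}  generalizability={g:.3f}  dependability={d:.3f}")
```

Running the loop shows both coefficients rising toward 1 as the trial count grows, with dependability always at or below generalizability because absolute error includes the trial main effect; this is the basic logic the primer extends to multiple occasions and groups.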




Updated: 2021-01-18