Determining the number of factors using parallel analysis and its recent variants: Comment on Lim and Jahng (2019).
Psychological Methods (IF 7.6). Pub Date: 2021-02-01. DOI: 10.1037/met0000269
André Achim
Lim and Jahng (2019) recently reported simulations supporting the conclusion that traditional parallel analysis (PA) performs more reliably than do more recent PA versions, particularly in the presence of minor factors acting as population error. With noise factors present, however, correctly identifying the number of main factors may mean retaining a noise dimension at the expense of missing a signal dimension. This is documented to occur in nearly 17% of the authors' conditions involving noise factors; those cases did not deserve to be counted as successes. In this context, the reported tendency of other methods to retain more dimensions than just the number of main factors (especially with increasing sample size) could mean that they in fact recovered the full set of main-factor dimensions. Some of these methods actually implement statistical tests of the null hypotheses that, for increasing values of k, the data could have been generated by a suitably determined k-factor model. When this null hypothesis holds, the data eigenvalue at rank k + 1 occupies a random rank among the same-rank eigenvalues from surrogate data generated according to the k-factor model; when k is insufficient, the data eigenvalue ranks high among those from the surrogate data. Achim (2017) already established that, for this purpose, iterative re-estimation of the communalities is more effective than squared multiple correlations for producing a suitable k-factor model, and that eigenvalue ranking works better with full than with reduced correlation matrices. This method is termed the Next Eigenvalue Sufficiency Test (NEST); code is available with the original article. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
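As a rough Python sketch of the sequential logic described above, the procedure below fits a k-factor model with iterative communality re-estimation, generates surrogate samples from that model, and checks whether the observed eigenvalue at rank k + 1 ranks unusually high among the same-rank surrogate eigenvalues. This is not the authors' published NEST code; the function names, the number of surrogate data sets, and the alpha level are illustrative assumptions.

import numpy as np


def fit_k_factor_model(R, k, n_iter=50):
    """Principal-axis factoring with iterative communality re-estimation."""
    comm = 1.0 - 1.0 / np.diag(np.linalg.inv(R))  # start from SMCs
    loadings = np.zeros((R.shape[0], k))
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, comm)                # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)
        idx = np.argsort(vals)[::-1][:k]
        loadings = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))
        comm = np.clip(np.sum(loadings ** 2, axis=1), 0.0, 0.999)
    return loadings


def nest_like_test(data, max_factors=10, n_surrogates=500, alpha=0.05, seed=None):
    """Sequential next-eigenvalue test, sketched after the NEST logic."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    obs_eigs = np.sort(np.linalg.eigvalsh(R))[::-1]   # full-matrix eigenvalues

    for k in range(max_factors + 1):
        # Fit a k-factor model (k = 0 means mutually uncorrelated variables).
        loadings = np.zeros((p, 0)) if k == 0 else fit_k_factor_model(R, k)
        uniq = np.clip(1.0 - np.sum(loadings ** 2, axis=1), 1e-6, 1.0)

        # Rank of the observed eigenvalue at position k + 1 among surrogates.
        exceed = 0
        for _ in range(n_surrogates):
            scores = rng.standard_normal((n, k))
            errors = rng.standard_normal((n, p)) * np.sqrt(uniq)
            surrogate = scores @ loadings.T + errors
            surr_eig = np.sort(np.linalg.eigvalsh(
                np.corrcoef(surrogate, rowvar=False)))[::-1][k]
            if surr_eig >= obs_eigs[k]:
                exceed += 1

        # Retain k factors once the (k + 1)-th observed eigenvalue no longer
        # ranks unusually high among the same-rank surrogate eigenvalues.
        if (exceed + 1) / (n_surrogates + 1) > alpha:
            return k
    return max_factors

In this sketch the eigenvalue comparison is done on full correlation matrices, in line with the finding reported above that eigenvalue ranking works better with full than with reduced correlation matrices.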

Updated: 2021-02-01