Determining the number of factors using parallel analysis and its recent variants: Comment on Lim and Jahng (2019).
Psychological Methods (IF 10.929), Pub Date: 2021-02-01, DOI: 10.1037/met0000269
André Achim

Lim and Jahng (2019) recently reported simulations supporting the conclusion that traditional parallel analysis (PA) performs more reliably than more recent PA variants, particularly in the presence of minor factors acting as population error. With noise factors, however, correctly identifying the number of main factors may mean retaining a noise dimension at the expense of missing a signal dimension. This is documented to occur in nearly 17% of the authors' conditions involving noise factors; such cases should not have qualified as successes. In this context, the reported tendency of other methods to retain more dimensions than the number of main factors (especially with increasing sample size) could mean that they in fact recovered the full set of main-factor dimensions. Some of these methods actually implement statistical tests of the null hypotheses that, for increasing values of k, the data could have been generated by a suitably determined k-factor model. When this holds, the data eigenvalue at rank k + 1 occupies a random rank among the same-rank eigenvalues from surrogate data generated according to the k-factor model; when k is insufficient, the data eigenvalue ranks high among those from the surrogate data. Achim (2017) already established that, for this purpose, iterative re-estimation of the communalities is more efficient than squared multiple regression for producing a suitable k-factor model, and that eigenvalue ranking works better with full than with reduced correlation matrices. This method is termed the Next Eigenvalue Sufficiency Test (NEST); code is available with the original article. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
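The eigenvalue-ranking logic described above can be illustrated with a short sketch. This is not the authors' published code: the function names (`paf_loadings`, `nest_step`), the number of surrogates, and the normal-theory surrogate generator are assumptions made here purely for the example. It tests, for a given k, whether the data eigenvalue at rank k + 1 ranks unusually high among same-rank eigenvalues from surrogate data drawn under a fitted k-factor model.

```python
import numpy as np

def paf_loadings(R, k, n_iter=50):
    """k-factor loadings via iterative principal-axis factoring.
    Communalities start at squared multiple correlations and are
    re-estimated on each pass (the iterative approach the comment favors)."""
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)          # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)
        top = np.argsort(vals)[::-1][:k]  # k largest eigenpairs
        L = vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))
        h2 = np.clip((L ** 2).sum(axis=1), 0, 0.995)
    return L

def nest_step(X, k, n_surrogates=200, seed=0):
    """One NEST-style step: p-value for H0 that a k-factor model suffices,
    based on the rank of the (k+1)-th data eigenvalue among surrogates."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    obs = np.sort(np.linalg.eigvalsh(R))[::-1][k]  # eigenvalue at rank k+1
    if k == 0:
        Sigma = np.eye(p)                 # null model: uncorrelated variables
    else:
        L = paf_loadings(R, k)
        Sigma = L @ L.T
        np.fill_diagonal(Sigma, 1.0)      # full (not reduced) correlation matrix
    C = np.linalg.cholesky(Sigma + 1e-8 * np.eye(p))
    count = 0
    for _ in range(n_surrogates):
        Y = rng.standard_normal((n, p)) @ C.T      # surrogate sample under H0
        surr = np.sort(np.linalg.eigvalsh(np.corrcoef(Y, rowvar=False)))[::-1][k]
        if surr >= obs:
            count += 1
    return (count + 1) / (n_surrogates + 1)
```

In use, one would test k = 0, 1, 2, ... in sequence and retain the first k whose test fails to reject, so that the retained dimensionality is the smallest k for which the (k+1)-th eigenvalue looks random among its surrogate counterparts.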
