Examining the Dynamic of Clustering Effects in Multilevel Designs: A Latent Variable Method Application Educ. Psychol. Meas. (IF 2.7) Pub Date : 2024-02-21 Tenko Raykov, Ahmed Haddadi, Christine DiStefano, Mohammed Alqabbaa
This note is concerned with the study of temporal development in several indices reflecting clustering effects in multilevel designs that are frequently utilized in educational and behavioral research. A latent variable method-based approach is outlined, which can be used to point and interval estimate the growth or decline in important functions of level-specific variances in two-level and three-level
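The simplest such clustering-effect index is the intraclass correlation coefficient (ICC), a ratio of level-specific variances. A minimal sketch with hypothetical variance values (a generic illustration, not the authors' latent variable method):

```python
# ICC for a two-level design: the proportion of total variance
# attributable to clusters (e.g., schools).
def icc(between_var: float, within_var: float) -> float:
    return between_var / (between_var + within_var)

# Hypothetical variances: 0.3 between clusters, 1.2 within clusters.
print(round(icc(0.3, 1.2), 3))  # 0.2
```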
-
Rotation Local Solutions in Multidimensional Item Response Theory Models Educ. Psychol. Meas. (IF 2.7) Pub Date : 2024-01-23 Hoang V. Nguyen, Niels G. Waller
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter sizes, (b) number of indicators per factor (trait), (c)
-
Detecting Careless Responding in Multidimensional Forced-Choice Questionnaires Educ. Psychol. Meas. (IF 2.7) Pub Date : 2024-01-12 Rebekka Kupffer, Susanne Frick, Eunike Wetzel
The multidimensional forced-choice (MFC) format is an alternative to rating scales in which participants rank items according to how well the items describe them. Currently, little is known about how to detect careless responding in MFC data. The aim of this study was to adapt a number of indices used for rating scales to the MFC format and additionally develop several new indices that are unique to
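One classic rating-scale index of the kind such studies adapt is the longstring index, the longest run of identical consecutive responses. A minimal sketch with hypothetical responses (the MFC adaptations in the article go beyond this):

```python
from itertools import groupby

# Longstring index: length of the longest run of identical consecutive
# responses; very long runs suggest careless responding.
def longstring(responses):
    return max(len(list(g)) for _, g in groupby(responses))

print(longstring([3, 3, 3, 3, 2, 5, 5, 1]))  # 4
```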
-
Two-Method Measurement Planned Missing Data With Purposefully Selected Samples Educ. Psychol. Meas. (IF 2.7) Pub Date : 2024-01-05 Menglin Xu, Jessica A. R. Logan
Research designs that include planned missing data are gaining popularity in applied education research. These methods have traditionally relied on introducing missingness into data collections using the missing completely at random (MCAR) mechanism. This study assesses whether planned missingness can also be implemented when data are instead designed to be purposefully missing based on student performance
-
The Trade-Off Between Factor Score Determinacy and the Preservation of Inter-Factor Correlations Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-04-29 André Beauducel, Norbert Hilger, Tobias Kuhl
Regression factor score predictors have the maximum factor score determinacy, that is, the maximum correlation with the corresponding factor, but they do not have the same inter-correlations as the...
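For a one-factor model, factor score determinacy is the correlation between the regression factor score predictor and the factor, computable as the square root of λ′Σ⁻¹λ. A minimal sketch with hypothetical standardized loadings:

```python
import numpy as np

# Hypothetical standardized loadings for a one-factor model with
# unit-variance indicators.
lam = np.array([0.8, 0.7, 0.6, 0.5])
theta = 1.0 - lam**2                         # uniquenesses
sigma = np.outer(lam, lam) + np.diag(theta)  # model-implied covariance

# Determinacy: correlation of the regression factor score with the factor.
rho = float(np.sqrt(lam @ np.linalg.solve(sigma, lam)))
print(round(rho, 3))
```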
-
Identifying Disengaged Responding in Multiple-Choice Items: Extending a Latent Class Item Response Model With Novel Process Data Indicators Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-04-29 Jana Welling, Timo Gnambs, Claus H. Carstensen
Disengaged responding poses a severe threat to the validity of educational large-scale assessments, because item responses from unmotivated test-takers do not reflect their actual ability. Existing...
-
Dominance Analysis for Latent Variable Models: A Comparison of Methods With Categorical Indicators and Misspecified Models Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-04-28 W. Holmes Finch
Dominance analysis (DA) is a very useful tool for ordering independent variables in a regression model based on their relative importance in explaining variance in the dependent variable. This appr...
-
A Comparison of Response Time Threshold Scoring Procedures in Mitigating Bias From Rapid Guessing Behavior Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-04-26 Joseph A. Rios, Jiayi Deng
Rapid guessing (RG) is a form of non-effortful responding that is characterized by short response latencies. This construct-irrelevant behavior has been shown in previous research to bias inference...
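One common threshold scoring procedure flags responses faster than a latency threshold as rapid guesses and rescores them as incorrect. A minimal sketch; the 3-second threshold and the data are hypothetical, and this is only one of the procedures such comparisons consider:

```python
# Rescore flagged rapid guesses (response time below threshold) as 0
# before computing the total score.
def rescore(responses, latencies, threshold=3.0):
    return [0 if rt < threshold else r for r, rt in zip(responses, latencies)]

scores = rescore([1, 1, 0, 1], [10.2, 1.4, 8.0, 5.5])
print(scores)  # [1, 0, 0, 1]
```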
-
Modeling Misspecification as a Parameter in Bayesian Structural Equation Models Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-04-24 James Ohisei Uanhoro
Accounting for model misspecification in Bayesian structural equation models is an active area of research. We present a uniquely Bayesian approach to misspecification that models the degree of mis...
-
A Note on Comparing the Bifactor and Second-Order Factor Models: Is the Bayesian Information Criterion a Routinely Dependable Index for Model Selection? Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-04-21 Tenko Raykov, Christine DiStefano, Lisa Calvocoressi
This note demonstrates that the widely used Bayesian Information Criterion (BIC) need not be generally viewed as a routinely dependable index for model selection when the bifactor and second-order ...
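The BIC in question is −2 log L + k ln n, with the lower value indicating the preferred model. A minimal sketch; the log-likelihoods and parameter counts below are hypothetical fits for two competing models:

```python
import math

# BIC = -2*logL + k*ln(n); smaller is better.
def bic(loglik: float, n_params: int, n_obs: int) -> float:
    return -2.0 * loglik + n_params * math.log(n_obs)

bic_bifactor = bic(-5120.0, 36, 500)      # hypothetical bifactor fit
bic_second_order = bic(-5127.0, 28, 500)  # hypothetical second-order fit
print(round(bic_bifactor, 1), round(bic_second_order, 1))
```

Here the more parsimonious second-order model attains the lower BIC despite the worse log-likelihood, illustrating how the penalty term can drive selection.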
-
Artificial Neural Networks for Short-Form Development of Psychometric Tests: A Study on Synthetic Populations Using Autoencoders Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-04-15 Monica Casella, Pasquale Dolce, Michela Ponticorvo, Nicola Milano, Davide Marocco
Short-form development is an important topic in psychometric research, which requires researchers to face methodological choices at different steps. The statistical techniques traditionally used fo...
-
Are the Steps on Likert Scales Equidistant? Responses on Visual Analog Scales Allow Estimating Their Distances Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-04-04 Miguel A. García-Pérez
A recurring question regarding Likert items is whether the discrete steps that this response format allows represent constant increments along the underlying continuum. This question appears unsolv...
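The underlying idea can be sketched simply: the mean visual-analog position within each Likert category estimates where that category sits on the continuum, and unequal gaps between adjacent categories speak against equidistance. The data below are hypothetical:

```python
import statistics

# Hypothetical VAS responses (0-100) grouped by the Likert category
# the same respondents chose for the same items.
vas_by_category = {
    1: [4, 9, 12], 2: [22, 28, 31], 3: [47, 50, 52],
    4: [64, 69, 71], 5: [88, 92, 95],
}
positions = {c: statistics.mean(v) for c, v in vas_by_category.items()}
gaps = [positions[c + 1] - positions[c] for c in range(1, 5)]
print([round(g, 1) for g in gaps])  # unequal gaps -> non-equidistant steps
```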
-
Evaluating Model Fit of Measurement Models in Confirmatory Factor Analysis Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-04-02 David Goretzko, Karik Siemund, Philipp Sterner
Confirmatory factor analyses (CFA) are often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging, altho...
-
Model Specification Searches in Structural Equation Modeling Using Bee Swarm Optimization Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-03-29 Ulrich Schroeders, Florian Scharf, Gabriel Olaru
Metaheuristics are optimization algorithms that efficiently solve a variety of complex combinatorial problems. In psychological research, metaheuristics have been applied in short-scale constructio...
-
Evaluating Close Fit in Ordinal Factor Analysis Models With Multiply Imputed Data Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-03-27 Dexin Shi, Bo Zhang, Ren Liu, Zhehan Jiang
Multiple imputation (MI) is one of the recommended techniques for handling missing data in ordinal factor analysis models. However, methods for computing MI-based fit indices under ordinal factor a...
-
The Impact of Measurement Model Misspecification on Coefficient Omega Estimates of Composite Reliability Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-02-18 Stephanie M. Bell, R. Philip Chalmers, David B. Flora
Coefficient omega indices are model-based composite reliability estimates that have become increasingly popular. A coefficient omega index estimates how reliably an observed composite score measure...
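For a unidimensional composite with standardized indicators, coefficient omega is (Σλ)² / ((Σλ)² + Σθ). A minimal sketch with hypothetical loadings (the article's focus is what happens when these loadings come from a misspecified model):

```python
# Omega total for a one-factor model with unit-variance indicators.
def omega(loadings):
    s = sum(loadings)
    uniq = sum(1.0 - l**2 for l in loadings)  # uniquenesses
    return s**2 / (s**2 + uniq)

print(round(omega([0.8, 0.7, 0.6, 0.5]), 3))  # 0.749
```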
-
Correcting for Extreme Response Style: Model Choice Matters Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-02-17 Martijn Schoenmakers, Jesper Tijmstra, Jeroen Vermunt, Maria Bolsinova
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questi...
-
Procedures for Analyzing Multidimensional Mixture Data Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-02-16 Hsu-Lin Su, Po-Hsi Chen
The multidimensional mixture data structure exists in many test (or inventory) conditions, and heterogeneity likewise exists in populations. Still, some researchers are interested in deciding to
-
A Note on Statistical Hypothesis Testing: Probabilifying Modus Tollens Invalidates Its Force? Not True! Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-01-13 Keith F. Widaman
The import or force of the result of a statistical test has long been portrayed as consistent with deductive reasoning. The simplest form of deductive argument has a first premise with conditional ...
-
What Affects the Quality of Score Transformations? Potential Issues in True-Score Equating Using the Partial Credit Model Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-01-13 Carolina Fellinghauer, Rudolf Debelak, Carolin Strobl
This simulation study investigated to what extent departures from construct similarity as well as differences in the difficulty and targeting of scales impact the score transformation when scales a...
-
Equating Oral Reading Fluency Scores: A Model-Based Approach Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-01-05 Yusuf Kara, Akihito Kamata, Xin Qiao, Cornelis J. Potgieter, Joseph F. T. Nese
Words read correctly per minute (WCPM) is the reporting score metric in oral reading fluency (ORF) assessments, which is popularly utilized as part of curriculum-based measurements to screen at-ris...
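The WCPM metric itself is straightforward, words read correctly scaled to a per-minute rate; the model-based approach in the article refines how this observed score is treated. A minimal sketch with hypothetical passage data:

```python
# Words correct per minute from a timed oral reading probe.
def wcpm(words_correct: int, seconds: float) -> float:
    return words_correct * 60.0 / seconds

print(wcpm(words_correct=47, seconds=60.0))  # 47.0
```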
-
Functional Approaches for Modeling Unfolding Data Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-01-05 George Engelhard
The purpose of this study is to introduce a functional approach for modeling unfolding response data. Functional data analysis (FDA) has been used for examining cumulative item response data, but a...
-
Why Do Regular and Reversed Items Load on Separate Factors? Response Difficulty vs. Item Extremity Educ. Psychol. Meas. (IF 2.7) Pub Date : 2023-01-02 Chester Chun Seng Kam
When constructing measurement scales, regular and reversed items are often used (e.g., “I am satisfied with my job”/“I am not satisfied with my job”). Some methodologists recommend excluding revers...
-
On Modeling Missing Data in Structural Investigations Based on Tetrachoric Correlations With Free and Fixed Factor Loadings Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-12-20 Karl Schweizer, Andreas Gold, Dorothea Krampen
In modeling missing data, the missing data latent variable of the confirmatory factor model accounts for systematic variation associated with missing data so that replacement of what is missing is ...
-
An Explanatory Multidimensional Random Item Effects Rating Scale Model Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-12-13 Sijia Huang, Jinwen (Jevan) Luo, Li Cai
Random item effects item response theory (IRT) models, which treat both person and item effects as random, have received much attention for more than a decade. The random item effects approach has ...
-
Evaluating the Effects of Missing Data Handling Methods on Scale Linking Accuracy Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-12-09 Tong Wu, Stella Y. Kim, Carl Westine
For large-scale assessments, data are often collected with missing responses. Despite the wide use of item response theory (IRT) in many testing programs, however, the existing literature offers li...
-
Detecting Preknowledge Cheating via Innovative Measures: A Mixture Hierarchical Model for Jointly Modeling Item Responses, Response Times, and Visual Fixation Counts Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-11-16 Kaiwen Man, Jeffrey R. Harring
Preknowledge cheating jeopardizes the validity of inferences based on test results. Many methods have been developed to detect preknowledge cheating by jointly analyzing item responses and response...
-
Position of Correct Option and Distractors Impacts Responses to Multiple-Choice Items: Evidence From a National Test Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-11-12 Séverin Lions, Pablo Dartnell, Gabriela Toledo, María Inés Godoy, Nora Córdova, Daniela Jiménez, Julie Lemarié
Even though the impact of the position of response options on answers to multiple-choice items has been investigated for decades, it remains debated. Research on this topic is inconclusive, perhaps...
-
Detecting Cheating in Large-Scale Assessment: The Transfer of Detectors to New Tests Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-11-04 Jochen Ranger, Nico Schmidt, Anett Wolgast
Recent approaches to the detection of cheaters in tests employ detectors from the field of machine learning. Detectors based on supervised learning algorithms achieve high accuracy but require labe...
-
Equidistant Response Options on Likert-Type Instruments: Testing the Interval Scaling Assumption Using Mplus Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-10-27 Georgios Sideridis, Ioannis Tsaousis, Hanan Ghamdi
The purpose of the present study was to provide the means to evaluate the “interval-scaling” assumption that governs the use of parametric statistics and continuous data estimators in self-report i...
-
A Comparison of Person-Fit Indices to Detect Social Desirability Bias Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-10-18 Sanaz Nazari, Walter L. Leite, A. Corinne Huggins-Manley
Social desirability bias (SDB) has been a major concern in educational and psychological assessments when measuring latent variables because it has the potential to introduce measurement error and ...
-
Generalized Mantel–Haenszel Estimators for Simultaneous Differential Item Functioning Tests Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-10-15 Ivy Liu, Thomas Suesse, Samuel Harvey, Peter Yongqi Gu, Daniel Fernández, John Randal
The Mantel–Haenszel estimator is one of the most popular techniques for measuring differential item functioning (DIF). A generalization of this estimator is applied to the context of DIF to compare...
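The core quantity behind MH DIF statistics is the Mantel–Haenszel common odds ratio pooled over score strata. A minimal sketch; the 2×2 tables (reference correct/incorrect, focal correct/incorrect per stratum) are hypothetical, and the article's generalization goes beyond this single-item form:

```python
# MH common odds ratio over strata; each stratum is a 2x2 table
# (A, B, C, D) = (ref correct, ref incorrect, focal correct, focal incorrect).
def mh_odds_ratio(strata):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

tables = [(30, 10, 25, 15), (40, 20, 35, 25), (20, 30, 15, 35)]
print(round(mh_odds_ratio(tables), 3))  # values far from 1 suggest DIF
```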
-
A Bayesian General Model to Account for Individual Differences in Operation-Specific Learning Within a Test Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-09-19 José H. Lozano, Javier Revuelta
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing t...
-
The NEAT Equating Via Chaining Random Forests in the Context of Small Sample Sizes: A Machine-Learning Method Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-09-04 Zhehan Jiang, Yuting Han, Lingling Xu, Dexin Shi, Ren Liu, Jinying Ouyang, Fen Cai
The responses that are absent by design in the nonequivalent groups with anchor test (NEAT) design can be treated as a planned missing-data scenario. In the context of small sample sizes, we present a mach
-
Relative Robustness of CDMs and (M)IRT in Measuring Growth in Latent Skills Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-08-18 Qi (Helen) Huang, Daniel M. Bolt
Previous studies have demonstrated evidence of latent skill continuity even in tests intentionally designed for measurement of binary skills. In addition, the assumption of binary skills when conti...
-
Are Speeded Tests Unfair? Modeling the Impact of Time Limits on the Gender Gap in Mathematics Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-08-16 Andrea H. Stoevenbelt, Jelte M. Wicherts, Paulette C. Flore, Lorraine A. T. Phillips, Jakob Pietschnig, Bruno Verschuere, Martin Voracek, Inga Schwabe
When cognitive and educational tests are administered under time limits, tests may become speeded and this may affect the reliability and validity of the resulting test scores. Prior research has s...
-
Exploration of the Stacking Ensemble Machine Learning Algorithm for Cheating Detection in Large-Scale Assessment Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-08-13 Todd Zhou, Hong Jiao
Cheating detection in large-scale assessment has received considerable attention in the extant literature. However, none of the previous studies in this line of research investigated the stacking ensem
-
Detecting Rating Scale Malfunctioning With the Partial Credit Model and Generalized Partial Credit Model Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-08-12 Stefanie A. Wind
Rating scale analysis techniques provide researchers with practical tools for examining the degree to which ordinal rating scales (e.g., Likert-type scales or performance assessment rating scales) ...
-
Comparing the Psychometric Properties of a Scale Across Three Likert and Three Alternative Formats: An Application to the Rosenberg Self-Esteem Scale Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-08-09 Xijuan Zhang, Linnan Zhou, Victoria Savalei
Zhang and Savalei proposed an alternative scale format to the Likert format, called the Expanded format. In this format, response options are presented in complete sentences, which can reduce acqui...
-
Supervised Classes, Unsupervised Mixing Proportions: Detection of Bots in a Likert-Type Questionnaire Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-07-30 Michael John Ilagan, Carl F. Falk
Administering Likert-type questionnaires to online samples risks contamination of the data by malicious computer-generated random responses, also known as bots. Although nonresponsivity indices (NR...
-
Assessing Dimensionality of IRT Models Using Traditional and Revised Parallel Analyses Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-07-21 Wenjing Guo, Youn-Jeng Choi
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor ana...
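Traditional parallel analysis retains dimensions whose observed eigenvalues exceed the mean eigenvalues of comparable random data. A minimal factor-analytic sketch on hypothetical two-dimensional data (not the revised, IRT-based variant the article studies):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 6

# Hypothetical data: two independent factors, three indicators each.
f = rng.normal(size=(n, 2))
data = np.hstack([f[:, [0]] + 0.7 * rng.normal(size=(n, 3)),
                  f[:, [1]] + 0.7 * rng.normal(size=(n, 3))])

# Observed eigenvalues vs. mean eigenvalues of 100 random data sets.
obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data.T)))[::-1]
rand = np.mean([np.sort(np.linalg.eigvalsh(
    np.corrcoef(rng.normal(size=(n, p)).T)))[::-1] for _ in range(100)], axis=0)

retained = int(np.sum(obs > rand))  # dimensions to retain
print(retained)
```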
-
The Impact and Detection of Uniform Differential Item Functioning for Continuous Item Response Models Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-07-21 W. Holmes Finch
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters...
-
On the Importance of Coefficient Alpha for Measurement Research: Loading Equality Is Not Necessary for Alpha’s Utility as a Scale Reliability Index Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-07-20 Tenko Raykov, James C. Anthony, Natalja Menold
The population relationship between coefficient alpha and scale reliability is studied in the widely used setting of unidimensional multicomponent measuring instruments. It is demonstrated that for any set of component loadings on the common factor, regardless of the extent of their inequality, the discrepancy between alpha and reliability can be arbitrarily small in any considered population and hence
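Coefficient alpha itself can be computed directly from an item-score matrix as α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch with hypothetical data:

```python
import numpy as np

# Cronbach's alpha from a persons-by-items score matrix.
def cronbach_alpha(x: np.ndarray) -> float:
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-person, 3-item data.
x = np.array([[2, 3, 3], [4, 4, 5], [1, 2, 2], [3, 3, 4], [5, 4, 5]], float)
print(round(cronbach_alpha(x), 3))  # 0.956
```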
-
Changes in the Speed–Ability Relation Through Different Treatments of Rapid Guessing Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-07-11 Tobias Deribo, Frank Goldhammer, Ulf Kroehne
As researchers in the social sciences, we are often interested in studying constructs that are not directly observable through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is only skimmed briefly rather than read and engaged with in depth. Hence, a response given under rapid-guessing behavior does bias
-
A Small Sample Correction for Factor Score Regression Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-07-02 Jasper Bogaert, Wen Wei Loh, Yves Rosseel
Factor score regression (FSR) is widely used as a convenient alternative to traditional structural equation modeling (SEM) for assessing structural relations between latent variables. But when latent variables are simply replaced by factor scores, biases in the structural parameter estimates often have to be corrected, due to the measurement error in the factor scores. The method of Croon (MOC) is
-
A Robust Method for Detecting Item Misfit in Large-Scale Assessments Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-07-02 Matthias von Davier, Ummugul Bezirhan
Viable methods for the identification of item misfit or Differential Item Functioning (DIF) are central to scale construction and sound measurement. Many approaches rely on the derivation of a limiting distribution under the assumption that a certain model fits the data perfectly. Typical DIF assumptions such as the monotonicity and population independence of item functions are present even in classical
-
Fixed Effects or Mixed Effects Classifiers? Evidence From Simulated and Archival Data Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-06-30 Anthony A. Mangino, Jocelyn H. Bolin, W. Holmes Finch
This study seeks to compare fixed and mixed effects models for the purposes of predictive classification in the presence of multilevel data. The first part of the study utilizes a Monte Carlo simulation to compare fixed and mixed effects logistic regression and random forests. An applied examination of the prediction of student retention in the public-use U.S. PISA data set was considered to verify
-
Scoring Graphical Responses in TIMSS 2019 Using Artificial Neural Networks Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-05-23 Matthias von Davier, Lillian Tyack, Lale Khorramdel
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our results show that convolutional neural networks (CNNs) outperform
-
The Impact of Sample Size and Various Other Factors on Estimation of Dichotomous Mixture IRT Models Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-05-19 Sedat Sen, Allan S. Cohen
The purpose of this study was to examine the effects of different data conditions on item parameter recovery and classification accuracy of three dichotomous mixture item response theory (IRT) models: the Mix1PL, Mix2PL, and Mix3PL. Manipulated factors in the simulation included the sample size (11 different sample sizes from 100 to 5000), test length (10, 30, and 50), number of classes (2 and 3),
-
Awareness Is Bliss: How Acquiescence Affects Exploratory Factor Analysis Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-05-16 E. Damiano D’Urso, Jesper Tijmstra, Jeroen K. Vermunt, Kim De Roover
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals’ latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these psychometric properties, where the number of measured constructs
-
Investigating Confidence Intervals of Item Parameters When Some Item Parameters Take Priors in the 2PL and 3PL Models Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-05-16 Insu Paek, Zhongtian Lin, Robert Philip Chalmers
To reduce the chance of Heywood cases or nonconvergence in estimating the 2PL or the 3PL model in the marginal maximum likelihood with the expectation-maximization (MML-EM) estimation method, priors for the item slope parameter in the 2PL model or for the pseudo-guessing parameter in the 3PL model can be used and the marginal maximum a posteriori (MMAP) and posterior standard error (PSE) are estimated
-
Evaluating the Quality of Classification in Mixture Model Simulations Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-04-29 Yoona Jang, Sehee Hong
The purpose of this study was to evaluate the degree of classification quality in the basic latent class model when covariates are either included or are not included in the model. To accomplish this task, Monte Carlo simulations were conducted in which the results of models with and without a covariate were compared. Based on these simulations, it was determined that models without a covariate better
-
On Bank Assembly and Block Selection in Multidimensional Forced-Choice Adaptive Assessments Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-04-28 Rodrigo S. Kreitchmann, Miguel A. Sorrel, Francisco J. Abad
Multidimensional forced-choice (FC) questionnaires have been consistently found to reduce the effects of socially desirable responding and faking in noncognitive assessments. Although FC has been considered problematic for providing ipsative scores under the classical test theory, item response theory (IRT) models enable the estimation of nonipsative scores from FC responses. However, while some authors
-
Summary Intervals for Model-Based Classification Accuracy and Consistency Indices Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-04-28 Oscar Gonzalez
When scores are used to make decisions about respondents, it is of interest to estimate classification accuracy (CA), the probability of making a correct decision, and classification consistency (CC), the probability of making the same decision across two parallel administrations of the measure. Model-based estimates of CA and CC computed from the linear factor model have been recently proposed, but
-
Performance of Coefficient Alpha and Its Alternatives: Effects of Different Types of Non-Normality Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-04-11 Leifeng Xiao, Kit-Tai Hau
We examined the performance of coefficient alpha and its potential competitors (ordinal alpha, omega total, Revelle’s omega total [omega RT], omega hierarchical [omega h], greatest lower bound [GLB], and coefficient H) with continuous and discrete data having different types of non-normality. Results showed the estimation bias was acceptable for continuous data with varying degrees of non-normality
-
Croon’s Bias-Corrected Estimation for Multilevel Structural Equation Models with Non-Normal Indicators and Model Misspecifications Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-03-11 Kyle Cox, Benjamin Kelcey
Multilevel structural equation models (MSEMs) are well suited for educational research because they accommodate complex systems involving latent variables in multilevel settings. Estimation using Croon’s bias-corrected factor score (BCFS) path estimation has recently been extended to MSEMs and demonstrated promise with limited sample sizes. This makes it well suited for planned educational research
-
Range Restriction Affects Factor Analysis: Normality, Estimation, Fit, Loadings, and Reliability Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-03-10 Alicia Franco-Martínez, Jesús M. Alvarado, Miguel A. Sorrel
A sample suffers range restriction (RR) when its variance is reduced compared with the population variance, so that it fails to represent that population. If the RR occurs on the latent factor rather than directly on the observed variable, the researcher is dealing with indirect RR, which is common when using convenience samples. This work explores how this problem affects different outputs of the factor analysis:
-
Multidimensional Forced-Choice CAT With Dominance Items: An Empirical Comparison With Optimal Static Testing Under Different Desirability Matching Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-03-07 Yin Lin, Anna Brown, Paul Williams
Several forced-choice (FC) computerized adaptive tests (CATs) have emerged in the field of organizational psychology, all of them employing ideal-point items. However, despite most items developed historically follow dominance response models, research on FC CAT using dominance items is limited. Existing research is heavily dominated by simulations and lacking in empirical deployment. This empirical
-
Resolving Dimensionality in a Child Assessment Tool: An Application of the Multilevel Bifactor Model Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-03-07 Hope O. Akaeze, Frank R. Lawrence, Jamie Heng-Chieh Wu
Multidimensionality and hierarchical data structure are common in assessment data. These design features, if not accounted for, can threaten the validity of the results and inferences generated from factor analysis, a method frequently employed to assess test dimensionality. In this article, we describe and demonstrate the application of the multilevel bifactor model to address these features in examining
-
Assessing Ability Recovery of the Sequential IRT Model With Unstructured Multiple-Attempt Data Educ. Psychol. Meas. (IF 2.7) Pub Date : 2022-03-02 Ziying Li, A. Corinne Huggins-Manley, Walter L. Leite, M. David Miller, Eric A. Wright
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) often come from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown growth in ability across attempts, leading to a complicated scenario for using this kind of data set as a whole in the practice of educational measurement