-
The receiver operating characteristic area under the curve (or mean ridit) as an effect size. Psychological Methods (IF 10.929) Pub Date : 2023-07-13 Michael Smithson
Several authors have recommended adopting the receiver operating characteristic (ROC) area under the curve (AUC) or mean ridit as an effect size, arguing that it measures an important and interpretable type of effect that conventional effect-size measures do not. It is base-rate insensitive, robust to outliers, and invariant under order-preserving transformations. However, applications have been limited
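The quantity the abstract describes can be illustrated directly. A minimal sketch (my own illustration, not the author's implementation): the AUC, or mean ridit, is the probability that a randomly drawn treatment score exceeds a randomly drawn control score, with ties counted as one half, and it is unchanged by any order-preserving transformation of the scores.

```python
from itertools import product

def auc_effect_size(control, treatment):
    """Probability that a random treatment score exceeds a random
    control score, with ties counted as one half (the mean ridit)."""
    pairs = list(product(control, treatment))
    wins = sum(t > c for c, t in pairs)
    ties = sum(t == c for c, t in pairs)
    return (wins + 0.5 * ties) / len(pairs)

control = [1, 2, 3]
treatment = [2, 3, 4]
print(auc_effect_size(control, treatment))          # 7/9, about 0.778

# Invariance under an order-preserving transformation (here x -> x**2):
print(auc_effect_size([c**2 for c in control],
                      [t**2 for t in treatment]))   # same value
```

Because only the ordering of scores matters, outliers and monotone rescalings leave the estimate untouched, which is the property the abstract highlights.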
-
A general Monte Carlo method for sample size analysis in the context of network models. Psychological Methods (IF 10.929) Pub Date : 2023-07-10 Mihai A Constantin,Noémi K Schuurman,Jeroen K Vermunt
We introduce a general method for sample size computations in the context of cross-sectional network models. The method takes the form of an automated Monte Carlo algorithm, designed to find an optimal sample size while iteratively concentrating the computations on the sample sizes that seem most relevant. The method requires three inputs: (1) a hypothesized network structure or desired characteristics
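The authors' algorithm is not reproduced in the abstract, so the following is only a generic Monte Carlo sample-size loop under assumptions of my own (a bivariate-normal effect of r = .3, a crude |r| > 1.96/sqrt(n) detection rule, and a coarse grid of candidate sizes): simulate data at each candidate n and return the smallest n whose estimated power reaches the target.

```python
import math, random

def estimate_power(n, rho, reps=500, seed=0):
    """Monte Carlo power of detecting a correlation of size rho at
    sample size n, using |r| > 1.96/sqrt(n) as the detection rule."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ys = [rho * x + math.sqrt(1 - rho**2) * rng.gauss(0, 1) for x in xs]
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        r = sxy / math.sqrt(sxx * syy)
        hits += abs(r) > 1.96 / math.sqrt(n)
    return hits / reps

def find_sample_size(rho=0.3, target=0.80, grid=range(40, 201, 10)):
    """Smallest candidate n whose Monte Carlo power reaches the target."""
    for n in grid:
        if estimate_power(n, rho) >= target:
            return n
    return None

print(find_sample_size())
```

The paper's method concentrates computations adaptively rather than walking a fixed grid, but the core idea of simulating at candidate sample sizes is the same.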
-
Consequences of sampling frequency on the estimated dynamics of AR processes using continuous-time models. Psychological Methods (IF 10.929) Pub Date : 2023-07-10 Rohit Batra,Simran K Johal,Meng Chen,Emilio Ferrer
Continuous-time (CT) models are a flexible approach for modeling longitudinal data of psychological constructs. When using CT models, a researcher can assume one underlying continuous function for the phenomenon of interest. In principle, these models overcome some limitations of discrete-time (DT) models and allow researchers to compare findings across measures collected using different time intervals
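The consequence of sampling frequency can be seen in the standard relation between a first-order CT model and its implied discrete-time autoregressive effect, phi(dt) = exp(b * dt), where b is the drift. A short sketch (the drift value is hypothetical):

```python
import math

def discrete_ar_coefficient(drift, delta_t):
    """Autoregressive effect implied by a first-order continuous-time
    model with a (negative) drift, at a given sampling interval."""
    return math.exp(drift * delta_t)

drift = -0.5  # hypothetical continuous-time drift parameter
for dt in (0.5, 1.0, 2.0):
    print(dt, round(discrete_ar_coefficient(drift, dt), 3))
# 0.5 0.779
# 1.0 0.607
# 2.0 0.368

# Consistency across intervals: phi(2) equals phi(1) squared.
```

The same underlying process thus yields very different discrete AR estimates depending on how frequently it is sampled, which is what makes cross-interval comparisons with DT models difficult.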
-
Dimensionality assessment in bifactor structures with multiple general factors: A network psychometrics approach. Psychological Methods (IF 10.929) Pub Date : 2023-07-06 Marcos Jiménez,Francisco J Abad,Eduardo Garcia-Garzon,Hudson Golino,Alexander P Christensen,Luis Eduardo Garrido
The accuracy of factor retention methods for structures with one or more general factors, like the ones typically encountered in fields like intelligence, personality, and psychopathology, has often been overlooked in dimensionality research. To address this issue, we compared the performance of several factor retention methods in this context, including a network psychometrics approach developed in
-
A novel approach to estimate moderated treatment effects and moderated mediated effects with continuous moderators. Psychological Methods (IF 10.929) Pub Date : 2023-06-12 Matthew J Valente,Judith J M Rijnhart,Oscar Gonzalez
Moderation analysis is used to study under what conditions or for which subgroups of individuals a treatment effect is stronger or weaker. When a moderator variable is categorical, such as assigned sex, treatment effects can be estimated for each group resulting in a treatment effect for males and a treatment effect for females. If a moderator variable is a continuous variable, a strategy for investigating
-
A comprehensive model framework for between-individual differences in longitudinal data. Psychological Methods (IF 10.929) Pub Date : 2023-06-12 Anja F Ernst,Casper J Albers,Marieke E Timmerman
Across different fields of research, the similarities and differences between various longitudinal models are not always eminently clear due to differences in data structure, application area, and terminology. Here we propose a comprehensive model framework that will allow simple comparisons between longitudinal models, to ease their empirical application and interpretation. At the within-individual
-
Bayesian penalty methods for evaluating measurement invariance in moderated nonlinear factor analysis. Psychological Methods (IF 10.929) Pub Date : 2023-06-08 Holger Brandt,Siyuan Marco Chen,Daniel J Bauer
Measurement invariance (MI) is one of the main psychometric requirements for analyses that focus on potentially heterogeneous populations. MI allows researchers to compare latent factor scores across persons from different subgroups, whereas if a measure is not invariant across all items and persons, then such comparisons may be misleading. If full MI does not hold, further testing may identify problematic
-
Is exploratory factor analysis always to be preferred? A systematic comparison of factor analytic techniques throughout the confirmatory-exploratory continuum. Psychological Methods (IF 10.929) Pub Date : 2023-05-25 Pablo Nájera,Francisco J Abad,Miguel A Sorrel
The number of available factor analytic techniques has been increasing in the last decades. However, the lack of clear guidelines and exhaustive comparison studies between the techniques might prevent these valuable methodological advances from making their way into applied research. The present paper evaluates the performance of confirmatory factor analysis (CFA), CFA with sequential model modification
-
A tool to simulate and visualize dyadic interaction dynamics. Psychological Methods (IF 10.929) Pub Date : 2023-05-25 Sophie W Berkhout,Noémi K Schuurman,Ellen L Hamaker
Dynamic models are becoming increasingly popular to study the dynamic processes of dyadic interactions. In this article, we present a Dyadic Interaction Dynamics (DID) Shiny app which provides simulations and visualizations of data from several models that have been proposed for the analysis of dyadic data. We propose data generation as a tool to inspire and guide theory development and elaborate on
-
Interpretable machine learning for psychological research: Opportunities and pitfalls. Psychological Methods (IF 10.929) Pub Date : 2023-05-25 Mirka Henninger,Rudolf Debelak,Yannick Rothacher,Carolin Strobl
In recent years, machine learning methods have become increasingly popular prediction methods in psychology. At the same time, psychological researchers are typically not only interested in making predictions about the dependent variable, but also in learning which predictor variables are relevant, how they influence the dependent variable, and which predictors interact with each other. However, most
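One widely used model-agnostic tool for the "which predictors are relevant" question is permutation importance: shuffle one predictor and measure how much a model's error grows. A self-contained sketch (the data, the stand-in model, and all settings are invented for illustration):

```python
import random

def mse(model, X, y):
    """Mean squared error of a predictor over a data set."""
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, reps=20, seed=1):
    """Average increase in MSE when one feature column is shuffled,
    breaking its association with the outcome."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    increases = []
    for _ in range(reps):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        increases.append(mse(model, X_perm, y) - baseline)
    return sum(increases) / reps

rng = random.Random(0)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(200)]
y = [2 * x1 + rng.gauss(0, 0.1) for x1, _ in X]  # only feature 0 matters

def model(row):  # stand-in for any fitted black-box predictor
    return 2 * row[0]

print(permutation_importance(model, X, y, feature=0))  # large
print(permutation_importance(model, X, y, feature=1))  # exactly 0 here
```

The abstract's "pitfalls" concern exactly such tools: importance measures can mislead under correlated predictors or interactions, which this toy example does not include.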
-
A true score imputation method to account for psychometric measurement error. Psychological Methods (IF 10.929) Pub Date : 2023-05-25 Maxwell Mansolf
Scores on self-report questionnaires are often used in statistical models without accounting for measurement error, leading to bias in estimates related to those variables. While measurement error corrections exist, their broad application is limited by their simplicity (e.g., Spearman's correction for attenuation), which complicates their inclusion in specialized analyses, or complexity (e.g., latent
-
A factored regression model for composite scores with item-level missing data. Psychological Methods (IF 10.929) Pub Date : 2023-05-25 Egamaria Alacam,Craig K Enders,Han Du,Brian T Keller
Composite scores are an exceptionally important psychometric tool for behavioral science research applications. A prototypical example occurs with self-report data, where researchers routinely use questionnaires with multiple items that tap into different features of a target construct. Item-level missing data are endemic to composite score applications. Many studies have investigated this issue, and
-
What are the mathematical bounds for coefficient α? Psychological Methods (IF 10.929) Pub Date : 2023-05-25 Niels Waller,William Revelle
Coefficient α, although ubiquitous in the research literature, is frequently criticized for being a poor estimate of test reliability. In this note, we consider the range of α and prove that it has no lower bound (i.e., α ∈ (−∞, 1]). While outlining our proofs, we present algorithms for generating data sets that will yield any fixed value of α in its range. We also prove that for some data sets, even
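The claim that α has no lower bound is easy to verify numerically with the usual formula α = k/(k−1) · (1 − sum of item variances / total-score variance): two nearly reversed items make the total-score variance tiny relative to the item variances, driving α far below zero. The data set below is my own construction, not one generated by the paper's algorithms.

```python
from statistics import variance

def cronbach_alpha(items):
    """Coefficient alpha from item score lists (one list per item)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(variance(vals) for vals in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Two nearly reversed items: alpha falls far below zero.
item1 = [1, 2, 3, 4]
item2 = [4, 3, 2, 2]
print(cronbach_alpha([item1, item2]))  # -56/3, about -18.67
```

Exactly reversed items would be degenerate (zero total-score variance), but approaching that limit pushes α toward negative infinity, consistent with the proof sketched in the abstract.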
-
A Bayesian classifier for fractal characterization of short behavioral series. Psychological Methods (IF 10.929) Pub Date : 2023-05-01 Alessandro Solfo,Cees van Leeuwen
Serial tasks in behavioral research often lead to correlated responses, invalidating the application of generalized linear models and leaving the analysis of serial correlations as the only viable option. We present a Bayesian analysis method suitable for classifying even relatively short behavioral series according to their correlation structure. Our classifier consists of three phases. Phase 1 distinguishes
-
The text-package: An R-package for analyzing and visualizing human language using natural language processing and transformers. Psychological Methods (IF 10.929) Pub Date : 2023-05-01 Oscar Kjell,Salvatore Giorgi,H Andrew Schwartz
The language that individuals use for expressing themselves contains rich psychological information. Recent significant advances in Natural Language Processing (NLP) and Deep Learning (DL), namely transformers, have resulted in large performance gains in tasks related to understanding natural language. However, these state-of-the-art methods have not yet been made easily accessible for psychology researchers
-
A default Bayes factor for testing null hypotheses about the fixed effects of linear two-level models. Psychological Methods (IF 10.929) Pub Date : 2023-04-27 Nikola Sekulovski,Herbert Hoijtink
Testing null hypotheses of the form "β = 0," by the use of various Null Hypothesis Significance Tests (rendering a dichotomous reject/not reject decision), is considered standard practice when evaluating the individual parameters of statistical models. Bayes factors for testing these (and other) hypotheses allow users to quantify the evidence in the data that is in favor of a hypothesis. Unfortunately
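The paper's specific default Bayes factor is not given in the abstract; as a generic illustration of how a Bayes factor quantifies evidence for "β = 0", here is the Savage–Dickey density ratio under a conjugate normal prior and normal likelihood (all numbers are hypothetical):

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def savage_dickey_bf01(beta_hat, se, prior_var):
    """Bayes factor for H0: beta = 0 against an unrestricted normal
    prior, via the Savage-Dickey density ratio: posterior density at
    zero divided by prior density at zero (normal likelihood assumed)."""
    post_var = 1 / (1 / prior_var + 1 / se ** 2)
    post_mean = (beta_hat / se ** 2) * post_var
    return normal_pdf(0, post_mean, post_var) / normal_pdf(0, 0, prior_var)

# An estimate right at zero yields evidence FOR the null (BF01 > 1):
print(savage_dickey_bf01(beta_hat=0.0, se=0.2, prior_var=1.0))
# A large, precise estimate yields evidence AGAINST it (BF01 < 1):
print(savage_dickey_bf01(beta_hat=0.8, se=0.2, prior_var=1.0))
```

Unlike a dichotomous significance test, the ratio is a graded measure: values near 1 signal that the data barely discriminate between the hypotheses.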
-
Data-driven covariate selection for confounding adjustment by focusing on the stability of the effect estimator. Psychological Methods (IF 10.929) Pub Date : 2023-04-27 Wen Wei Loh,Dongning Ren
Valid inference of cause-and-effect relations in observational studies necessitates adjusting for common causes of the focal predictor (i.e., treatment) and the outcome. When such common causes, henceforth termed confounders, remain unadjusted for, they generate spurious correlations that lead to biased causal effect estimates. But routine adjustment for all available covariates, when only a subset
-
Pooling methods for likelihood ratio tests in multiply imputed data sets. Psychological Methods (IF 10.929) Pub Date : 2023-04-27 Simon Grund,Oliver Lüdtke,Alexander Robitzsch
Likelihood ratio tests (LRTs) are a popular tool for comparing statistical models. However, missing data are also common in empirical research, and multiple imputation (MI) is often used to deal with them. In multiply imputed data, there are multiple options for conducting LRTs, and new methods are still being proposed. In this article, we compare all available methods in multiple simulations covering
-
A posterior expected value approach to decision-making in the multiphase optimization strategy for intervention science. Psychological Methods (IF 10.929) Pub Date : 2023-04-13 Jillian C Strayhorn,Linda M Collins,David J Vanness
In current practice, intervention scientists applying the multiphase optimization strategy (MOST) with a 2^k factorial optimization trial use a component screening approach (CSA) to select intervention components for inclusion in an optimized intervention. In this approach, scientists review all estimated main effects and interactions to identify the important ones based on a fixed threshold, and then
-
Causal inference for treatment effects in partially nested designs. Psychological Methods (IF 10.929) Pub Date : 2023-04-13 Xiao Liu,Fang Liu,Laura Miller-Graff,Kathryn H Howell,Lijuan Wang
Partially nested designs (PNDs) are common in intervention studies in psychology and other social sciences. With this design, participants are assigned to treatment and control groups on an individual basis, but clustering occurs in some but not all groups (e.g., the treatment group). In recent years, there has been substantial development of methods for analyzing data from PNDs. However, little research
-
How do unobserved confounding mediators and measurement error impact estimated mediation effects and corresponding statistical inferences? Introducing the R package ConMed for sensitivity analysis. Psychological Methods (IF 10.929) Pub Date : 2023-04-01 Qinyun Lin,Amy K Nuttall,Qian Zhang,Kenneth A Frank
Empirical studies often demonstrate multiple causal mechanisms potentially involving simultaneous or causally related mediators. However, researchers often use simple mediation models to understand the processes because they do not or cannot measure other theoretically relevant mediators. In such cases, another relevant but unobserved mediator may confound the observed mediator
-
Why the use of segmented regression analysis to explore change in relations between variables is problematic: A simulation study. Psychological Methods (IF 10.929) Pub Date : 2023-03-27 Moritz Breit,Julian Preuß,Vsevolod Scherrer,Franzis Preckel
Relations between variables can take different forms like linearity, piecewise linearity, or nonlinearity. Segmented regression analyses (SRA) are specialized statistical methods that detect breaks in the relationship between variables. They are commonly used in the social sciences for exploratory analyses. However, many relations may not be best described by a breakpoint and a resulting piecewise
-
Factorization of person response profiles to identify summative profiles carrying central response patterns. Psychological Methods (IF 10.929) Pub Date : 2023-03-27 Se-Kang Kim
A data matrix, where rows represent persons and columns represent measured subtests, can be viewed as a stack of person profiles, as rows are actually person profiles of observed responses on column subtests. Profile analysis seeks to identify a small number of latent profiles from a large number of person response profiles to identify central response patterns, which are useful for assessing the strengths
-
Questionable research practices and cumulative science: The consequences of selective reporting on effect size bias and heterogeneity. Psychological Methods (IF 10.929) Pub Date : 2023-03-23 Samantha F Anderson,Xinran Liu
Despite increased attention to open science and transparency, questionable research practices (QRPs) remain common, and studies published using QRPs will remain a part of the published record for some time. A particularly common type of QRP involves multiple testing, and in some forms of this, researchers report only a selection of the tests conducted. Methodological investigations of multiple testing
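The basic mechanism, reporting only the largest of several estimates of the same effect, can be simulated in a few lines. The effect size, number of tests, and standard-error approximation below are illustrative assumptions, not the authors' design.

```python
import math, random

def mean_reported_effect(true_d=0.2, n=50, tests=5, reps=2000, seed=2):
    """Average reported effect size when, out of several noisy
    estimates of the same true effect, only the largest is reported."""
    rng = random.Random(seed)
    se = math.sqrt(2 / n)  # rough standard error of Cohen's d
    reported = []
    for _ in range(reps):
        estimates = [rng.gauss(true_d, se) for _ in range(tests)]
        reported.append(max(estimates))  # the selective-reporting step
    return sum(reported) / reps

print(mean_reported_effect())  # well above the true effect of 0.2
```

Selecting the maximum of several unbiased estimates yields a biased one, which is how selective reporting inflates the effect sizes that enter the cumulative literature.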
-
True and error analysis instead of test of correlated proportions: Can we save lexicographic semiorder models with error theory? Psychological Methods (IF 10.929) Pub Date : 2023-03-23 Michael H Birnbaum
This article criticizes conclusions drawn from the standard test of correlated proportions when the dependent measure contains error. It presents a tutorial on a new method of analysis based on the true and error (TE) theory. This method allows the investigator to separate measurement of error from substantive conclusions about the effects of the independent variable, but it requires replicated measures
-
Cognitive and cultural models in psychological science: A tutorial on modeling free-list data as a dependent variable in Bayesian regression. Psychological Methods (IF 10.929) Pub Date : 2023-03-23 Theiss Bendixen,Benjamin Grant Purzycki
Assessing relationships between culture and cognition is central to psychological science. To this end, free-listing is a useful methodological instrument. To facilitate its wider use, we here present the free-list method along with some of its many applications and offer a tutorial on how to prepare and statistically model free-list data as a dependent variable in Bayesian regression using openly
-
Missing data: An update on the state of the art. Psychological Methods (IF 10.929) Pub Date : 2023-03-16 Craig K Enders
The year 2022 is the 20th anniversary of Joseph Schafer and John Graham's paper titled "Missing data: Our view of the state of the art," currently the most highly cited paper in the history of Psychological Methods. Much has changed since 2002, as missing data methodologies have continually evolved and improved; the range of applications that are possible with modern missing data techniques has increased
-
An introductory guide for conducting psychological research with big data. Psychological Methods (IF 10.929) Pub Date : 2023-03-13 Michela Vezzoli,Cristina Zogmaister
Big Data can bring enormous benefits to psychology. However, many psychological researchers show skepticism in undertaking Big Data research. Psychologists often do not take Big Data into consideration while developing their research projects because they have difficulties imagining how Big Data could help in their specific field of research, imagining themselves as "Big Data scientists," or for lack
-
Comparing random effects models, ordinary least squares, or fixed effects with cluster robust standard errors for cross-classified data. Psychological Methods (IF 10.929) Pub Date : 2023-03-09 Young Ri Lee,James E Pustejovsky
Cross-classified random effects modeling (CCREM) is a common approach for analyzing cross-classified data in psychology, education research, and other fields. However, when the focus of a study is on the regression coefficients at Level 1 rather than on the random effects, ordinary least squares regression with cluster robust variance estimators (OLS-CRVE) or fixed effects regression with CRVE (FE-CRVE)
-
Reliable network inference from unreliable data: A tutorial on latent network modeling using STRAND. Psychological Methods (IF 10.929) Pub Date : 2023-03-06 Daniel Redhead,Richard McElreath,Cody T Ross
Social network analysis provides an important framework for studying the causes, consequences, and structure of social ties. However, standard self-report measures (for example, as collected through the popular "name-generator" method) do not provide an impartial representation of such ties, be they transfers, interactions, or social relationships. At best, they represent perceptions filtered through
-
Comparing theories with the Ising model of explanatory coherence. Psychological Methods (IF 10.929) Pub Date : 2023-03-02 Maximilian Maier,Noah van Dongen,Denny Borsboom
Theories are among the most important tools of science. Lewin (1943) already noted "There is nothing as practical as a good theory." Although psychologists discussed problems of theory in their discipline for a long time, weak theories are still widespread in most subfields. One possible reason for this is that psychologists lack the tools to systematically assess the quality of their theories. Thagard
-
Improving hierarchical models of individual differences: An extension of Goldberg's bass-ackward method. Psychological Methods (IF 10.929) Pub Date : 2023-02-13 Miriam K Forbes
Goldberg's (2006) bass-ackward approach to elucidating the hierarchical structure of individual differences data has been used widely to improve our understanding of the relationships among constructs of varying levels of granularity. The traditional approach has been to extract a single component or factor on the first level of the hierarchy, two on the second level, and so on, treating the correlations
-
Regularized continuous time structural equation models: A network perspective. Psychological Methods (IF 10.929) Pub Date : 2023-01-12 Jannik H Orzek,Manuel C Voelkle
Regularized continuous time structural equation models are proposed to address two recent challenges in longitudinal research: unequally spaced measurement occasions and high model complexity. Unequally spaced measurement occasions are part of most longitudinal studies, sometimes intentionally (e.g., in experience sampling methods), sometimes unintentionally (e.g., due to missing data). Yet, prominent
-
Cross-level covariance approach to the disaggregation of between-person effect and within-person effect. Psychological Methods (IF 10.929) Pub Date : 2023-01-09 Kazuki Hori,Yasuo Miyazaki
In longitudinal studies, researchers are often interested in investigating relations between variables over time. A well-known issue in such a situation is that naively regressing an outcome on a predictor results in a coefficient that is a weighted average of the between-person and within-person effect, which is difficult to interpret. This article focuses on the cross-level covariance approach to
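The conflation the abstract describes is easy to reproduce: person-mean centering recovers the within-person slope, person means give the between-person slope, and the naive pooled slope is a blend of the two. A toy sketch with invented data:

```python
def slope(xs, ys):
    """Ordinary least squares slope of y on x."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Two persons; within each person y rises with x, but the person with
# the higher average x has the lower average y.
data = {"A": ([1, 2, 3], [11, 12, 13]),
        "B": ([4, 5, 6], [1, 2, 3])}

xs = [x for x_, y_ in data.values() for x in x_]
ys = [y for x_, y_ in data.values() for y in y_]
naive = slope(xs, ys)  # a blend of the two effects: negative here

# Person-mean centering isolates the within-person effect ...
xc = [x - sum(x_) / len(x_) for x_, y_ in data.values() for x in x_]
yc = [y - sum(y_) / len(y_) for x_, y_ in data.values() for y in y_]
within = slope(xc, yc)  # +1.0

# ... and person means give the between-person effect.
between = slope([sum(x_) / len(x_) for x_, _ in data.values()],
                [sum(y_) / len(y_) for _, y_ in data.values()])

print(naive, within, between)
```

The naive slope is negative even though every person's within-person slope is +1, which is exactly the interpretational hazard the disaggregation approaches address.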
-
We need to change how we compute RMSEA for nested model comparisons in structural equation modeling. Psychological Methods (IF 10.929) Pub Date : 2023-01-09 Victoria Savalei,Jordan C Brace,Rachel T Fouladi
Comparison of nested models is common in applications of structural equation modeling (SEM). When two models are nested, model comparison can be done via a chi-square difference test or by comparing indices of approximate fit. The advantage of fit indices is that they permit some amount of misspecification in the additional constraints imposed on the model, which is a more realistic scenario. The most
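One way to attach an RMSEA directly to the difference test, rather than deriving it from the two models' separate fit statistics, is to treat the Δχ² statistic as its own test on Δdf degrees of freedom. The sketch below illustrates that idea; it should not be taken as the authors' exact formula.

```python
import math

def rmsea_d(chisq_diff, df_diff, n):
    """RMSEA-style index computed from a chi-square difference test
    itself: sqrt(max(dchisq - ddf, 0) / (ddf * (N - 1)))."""
    return math.sqrt(max(chisq_diff - df_diff, 0) / (df_diff * (n - 1)))

# Example: delta chi-square = 30 on 5 df with N = 201
print(round(rmsea_d(30, 5, 201), 4))  # 0.1581
```

When the difference statistic does not exceed its degrees of freedom, the index is zero, mirroring how the ordinary RMSEA handles well-fitting models.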
-
A primer on synthesizing individual participant data obtained from complex sampling surveys: A two-stage IPD meta-analysis approach. Psychological Methods (IF 10.929) Pub Date : 2023-01-09 Diego G Campos,Mike W-L Cheung,Ronny Scherer
The increasing availability of individual participant data (IPD) in the social sciences offers new possibilities to synthesize research evidence across primary studies. Two-stage IPD meta-analysis represents a framework that can utilize these possibilities. While most of the methodological research on two-stage IPD meta-analysis focused on its performance compared with other approaches, dealing with
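Stage two of a two-stage IPD meta-analysis reduces, in the simplest fixed-effect case, to an inverse-variance weighted average of the per-study estimates produced in stage one. A minimal sketch with hypothetical numbers (real applications would use random-effects models and the survey weights the paper discusses):

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Stage-two fixed-effect synthesis: inverse-variance weighted
    average of per-study estimates, with its pooled standard error."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical study-level estimates from stage one
est, se = fixed_effect_pool([0.30, 0.10, 0.25], [0.10, 0.05, 0.10])
print(round(est, 4), round(se, 4))  # 0.1583 0.0408
```

The most precise study dominates the pooled value, which is the defining behavior of inverse-variance weighting.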
-
Error tight: Exercises for lab groups to prevent research mistakes. Psychological Methods (IF 10.929) Pub Date : 2023-01-02 Julia F Strand
Scientists, being human, make mistakes. We transcribe things incorrectly, we make errors in our code, and we intend to do things and then forget. The consequences of errors in research may be as minor as wasted time and annoyance, but may be as severe as losing months of work or having to retract an article. The purpose of this tutorial is to help lab groups identify places in their research workflow
-
Improving content validity evaluation of assessment instruments through formal content validity analysis. Psychological Methods (IF 10.929) Pub Date : 2023-01-02 Andrea Spoto,Massimo Nucci,Elena Prunetti,Michele Vicovaro
Content validity is defined as the degree to which elements of an assessment instrument are relevant to and representative of the target construct. The available methods for content validity evaluation typically focus on the extent to which a set of items are relevant to the target construct, but do not afford precise evaluation of items' behavior, nor their exhaustiveness with respect to the elements
-
A structural after measurement approach to structural equation modeling. Psychological Methods (IF 10.929) Pub Date : 2022-11-10 Yves Rosseel, Wen Wei Loh
In structural equation modeling (SEM), the measurement and structural parts of the model are usually estimated simultaneously. In this article, we revisit the long-standing idea that we should first estimate the measurement part, and then estimate the structural part. We call this the “structural-after-measurement” (SAM) approach to SEM. We describe a formal framework for the SAM approach under settings
-
Who is and is not “average”? Random effects selection with spike-and-slab priors. Psychological Methods (IF 10.929) Pub Date : 2022-11-03 Josue E. Rodriguez, Donald R. Williams, Philippe Rast
Mixed-effects models are often employed to study individual differences in psychological science. Such analyses commonly entail testing whether between-subjects variability exists and including covariates to explain that variability. We argue that researchers have much to gain by explicitly focusing on the individual in individual differences research. To this end, we propose the spike-and-slab prior
-
Estimating the number of factors in exploratory factor analysis via out-of-sample prediction errors. Psychological Methods (IF 10.929) Pub Date : 2022-11-03 Jonas M. B. Haslbeck, Riet van Bork
Exploratory factor analysis (EFA) is one of the most popular statistical models in psychological science. A key problem in EFA is to estimate the number of factors. In this article, we present a new method for estimating the number of factors based on minimizing the out-of-sample prediction error of candidate factor models. We show in an extensive simulation study that our method slightly outperforms
-
Estimating and investigating multiple constructs multiple indicators social relations models with and without roles within the traditional structural equation modeling framework: A tutorial. Psychological Methods (IF 10.929) Pub Date : 2022-10-13 David Jendryczko, Fridtjof W. Nussbeck
The present contribution provides a tutorial for the estimation of the social relations model (SRM) by means of structural equation modeling (SEM). In the overarching SEM-framework, the SRM without roles (with interchangeable dyads) is derived as a more restrictive form of the SRM with roles (with noninterchangeable dyads). Starting with the simplest type of the SRM for one latent construct assessed
-
Extending the actor-partner interdependence model to accommodate multivariate dyadic data using latent variables. Psychological Methods (IF 10.929) Pub Date : 2022-10-11 Hanna Kim, Jee-Seon Kim
This study extends the traditional Actor-Partner Interdependence model (APIM; Kenny, 1996) to incorporate dyadic data with multiple indicators reflecting latent constructs. Although the APIM has been widely used to model interdependence in dyads, the method and its applications have largely been limited to single sets of manifest variables. This article presents three extensions of the APIM that can
-
Distributional causal effects: Beyond an “averagarian” view of intervention effects. Psychological Methods (IF 10.929) Pub Date : 2022-10-06 Wolfgang Wiedermann, Bixi Zhang, Wendy Reinke, Keith C. Herman, Alexander von Eye
The usefulness of mean aggregates in the analysis of intervention effectiveness is a matter of considerable debate in the psychological, educational, and social sciences. In addition to studying "average treatment effects," the evaluation of "distributional treatment effects" (i.e., effects that go beyond means) has been suggested to obtain a broader picture of how an intervention affects the study
-
Aberrant distortion of variance components in multilevel models under conflation of level-specific effects. Psychological Methods (IF 10.929) Pub Date : 2022-10-06 Jason D. Rights
Methodologists have often acknowledged that, in multilevel contexts, level-1 variables may have distinct within-cluster and between-cluster effects. However, a prevailing notion in the literature is that separately estimating these effects is primarily important when there is specific interest in doing so. Consequently, in practice, researchers uninterested in disaggregating these effects (or unaware
-
Subgroup discovery in structural equation models. Psychological Methods (IF 10.929) Pub Date : 2022-10-06 Christoph Kiefer, Florian Lemmerich, Benedikt Langenberg, Axel Mayer
Structural equation modeling is one of the most popular statistical frameworks in the social and behavioral sciences. Often, detection of groups with distinct sets of parameters in structural equation models (SEM) is of key importance for applied researchers, for example, when investigating differential item functioning for a mental ability test or examining children with exceptional educational trajectories
-
Ubiquitous bias and false discovery due to model misspecification in analysis of statistical interactions: The role of the outcome’s distribution and metric properties. Psychological Methods (IF 10.929) Pub Date : 2022-10-06 Benjamin W. Domingue, Klint Kanopka, Sam Trejo, Mijke Rhemtulla, Elliot M. Tucker-Drob
Studies of interaction effects are of great interest because they identify crucial interplay between predictors in explaining outcomes. Previous work has considered several potential sources of statistical bias and substantive misinterpretation in the study of interactions, but less attention has been devoted to the role of the outcome variable in such research. Here, we consider bias and false discovery
-
Selecting scaling indicators in structural equation models (sems). Psychological Methods (IF 10.929) Pub Date : 2022-10-06 Kenneth A. Bollen, Adam G. Lilly, Lan Luo
It is common practice for psychologists to specify models with latent variables to represent concepts that are difficult to directly measure. Each latent variable needs a scale, and the most popular method of scaling as well as the default in most structural equation modeling (SEM) software uses a scaling or reference indicator. Much of the time, the choice of which indicator to use for this purpose
-
Assessing the fitting propensity of factor models. Psychological Methods (IF 10.929) Pub Date : 2022-10-06 Martina Bader, Morten Moshagen
Model selection is an omnipresent issue in structural equation modeling (SEM). When deciding among competing theories instantiated as formal statistical models, a trade-off is often sought between goodness-of-fit and model parsimony. Whereas traditional fit assessment in SEM quantifies parsimony solely as the number of free parameters, the ability of a model to account for diverse data patterns—known
-
Updated guidelines on selecting an intraclass correlation coefficient for interrater reliability, with applications to incomplete observational designs. Psychological Methods (IF 10.929) Pub Date : 2022-09-01 Debby ten Hove, Terrence D. Jorgensen, L. Andries van der Ark
Several intraclass correlation coefficients (ICCs) are available to assess the interrater reliability (IRR) of observational measurements. Selecting an ICC is complicated, and existing guidelines have three major limitations. First, they do not discuss incomplete designs, in which raters partially vary across subjects. Second, they provide no coherent perspective on the error variance in an ICC, clouding
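For the simplest complete one-way design, an ICC can be computed directly from the one-way ANOVA mean squares: ICC(1) = (MSB − MSW) / (MSB + (k − 1) · MSW). A sketch with toy data; this does not cover the incomplete designs, with raters partially varying across subjects, that the paper addresses.

```python
def icc_oneway(scores):
    """ICC(1) from a complete subjects-by-raters table, via the
    one-way ANOVA between- and within-subject mean squares."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    means = [sum(row) / k for row in scores]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(scores, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three subjects, each scored by the same two raters
print(round(icc_oneway([[8, 9], [4, 5], [6, 7]]), 3))  # 0.882
```

ICC(1) charges all rater disagreement to error; the guideline problem the paper tackles is choosing among the many ICC variants that partition that error differently.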
-
Reliability and omega hierarchical in multidimensional data: A comparison of various estimators. Psychological Methods (IF 10.929) Pub Date : 2022-09-01 Eunseong Cho
The current guidelines for estimating reliability recommend using two omega combinations in multidimensional data. One omega is for factor analysis (FA) reliability estimators, and the other omega is for omega hierarchical estimators (i.e., ωh). This study challenges these guidelines. Specifically, the following three questions are asked: (a) Do FA reliability estimators outperform non-FA reliability
-
Using natural language processing and machine learning to replace human content coders. Psychological Methods (IF 10.929) Pub Date : 2022-08-25 Yilei Wang, Jingyuan Tian, Yagizhan Yazar, Deniz S. Ones, Richard N. Landers
Content analysis is a common and flexible technique to quantify and make sense of qualitative data in psychological research. However, the practical implementation of content analysis is extremely labor-intensive and subject to human coder errors. Applying natural language processing (NLP) techniques can help address these limitations. We explain and illustrate these techniques to psychological researchers
-
Comparing revised latent state–trait models including autoregressive effects. Psychological Methods (IF 10.929) Pub Date : 2022-08-04 Nele Stadtbaeumer, Stefanie Kreissl, Axel Mayer
Understanding the longitudinal dynamics of behavior, their stability and change over time, is of great interest in the social and behavioral sciences. Researchers investigate the degree to which an observed measure reflects stable components of the construct, situational fluctuations, method effects, or just random measurement error. An important question in such models is whether autoregressive effects
-
Using synthetic data to improve the reproducibility of statistical results in psychological research. Psychological Methods (IF 10.929) Pub Date : 2022-08-04 Simon Grund, Oliver Lüdtke, Alexander Robitzsch
In recent years, psychological research has faced a credibility crisis, and open data are often regarded as an important step toward a more reproducible psychological science. However, privacy concerns are among the main reasons that prevent data sharing. Synthetic data procedures, which are based on the multiple imputation (MI) approach to missing data, can be used to replace sensitive data with simulated
-
Reassessment of innovative methods to determine the number of factors: A simulation-based comparison of exploratory graph analysis and next eigenvalue sufficiency test. Psychological Methods (IF 10.929) Pub Date : 2022-08-04 Nils Brandenburg, Martin Papenberg
Next Eigenvalue Sufficiency Test (NEST; Achim, 2017) is a recently proposed method to determine the number of factors in exploratory factor analysis (EFA). NEST sequentially tests the null hypothesis that k factors are sufficient to model correlations among observed variables. Another recent approach to detect factors is exploratory graph analysis (EGA; Golino & Epskamp, 2017), which rules the number
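The core idea of eigenvalue-based sufficiency testing can be illustrated in a stripped-down, parallel-analysis-style form; this is a sketch in the spirit of such tests, not Achim's (2017) full NEST algorithm. Here the first eigenvalue of the observed correlation matrix is compared against its null distribution under "zero factors" (independent variables of the same dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

def nth_eigenvalue(data, n):
    """n-th largest eigenvalue of the sample correlation matrix."""
    evals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    return np.sort(evals)[::-1][n]

# Simulated observed data: 300 cases, 6 variables driven by one common factor.
factor = rng.normal(size=(300, 1))
data = factor @ np.full((1, 6), 0.7) + rng.normal(scale=0.7, size=(300, 6))

# Null distribution of the first eigenvalue under "zero factors are sufficient".
null = [nth_eigenvalue(rng.normal(size=data.shape), 0) for _ in range(200)]
reject_zero_factors = nth_eigenvalue(data, 0) > np.percentile(null, 95)
print(reject_zero_factors)
```

A full sequential test would continue with k = 1, 2, ... until a null hypothesis is retained, simulating null data from a fitted k-factor model at each step.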
-
A tutorial on ordinary differential equations in behavioral science: What does physics teach us? Psychological Methods (IF 10.929) Pub Date : 2022-08-01 Denis Mongin, Adriana Uribe, Stephane Cullati, Delphine S. Courvoisier
The present tutorial uses concepts from physics and mathematics to help behavioral scientists apply differential equations in their studies. It focuses on the first-order and the second-order (damped oscillator) differential equation. Simple examples make it possible to detail the meaning of the coefficients, the conditions of applicability of these differential equations, the underlying hypothesis
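The second-order damped oscillator mentioned in the abstract, x'' + 2ζωx' + ω²x = 0, can be integrated numerically by rewriting it as a first-order system; a minimal sketch with illustrative parameter values (not taken from the tutorial):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped linear oscillator: x'' + 2*zeta*omega*x' + omega**2 * x = 0,
# a common second-order model for self-regulating processes.
omega, zeta = 2.0, 0.2   # natural frequency and damping ratio (illustrative)

def rhs(t, y):
    x, v = y  # state and its rate of change
    return [v, -2 * zeta * omega * v - omega**2 * x]

# Release from x = 1 at rest; underdamped (zeta < 1), so the trajectory
# oscillates while decaying toward the equilibrium at 0.
sol = solve_ivp(rhs, t_span=(0, 10), y0=[1.0, 0.0], dense_output=True)
print(sol.y[0, -1])
```

Interpreting ω (frequency of fluctuation) and ζ (speed of return to equilibrium) in substantive terms is exactly the kind of coefficient reading the tutorial works through.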
-
On the white, the black, and the many shades of gray in between: Our reply to Van Ravenzwaaij and Wagenmakers (2021). Psychological Methods (IF 10.929) Pub Date : 2022-07-28 Jorge N. Tendeiro, Henk A. L. Kiers
In 2019 we wrote an article (Tendeiro & Kiers, 2019) in Psychological Methods on null hypothesis Bayesian testing and its workhorse, the Bayes factor. Recently, van Ravenzwaaij and Wagenmakers (2021) offered a response to our piece, also in this journal. Although we do welcome their contribution with thought-provoking remarks on our article, we ended up concluding that there were too many “issues”
-
Comparing methods for assessing a difference in correlations with dependent groups, measurement error, nonnormality, and incomplete data. Psychological Methods (IF 10.929) Pub Date : 2022-07-28 Qian Zhang
I compared multiple methods to estimate and test a difference in correlations (ρdiff) between two variables that are repeatedly measured or originate from dyads. Fisher’s z transformed correlations are often used for testing ρdiff. However, raw scores are typically used directly to compute correlations under this popular method, whose performance has not been evaluated with measurement error or nonnormality
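The Fisher z approach mentioned here is easiest to see in its independent-groups form; note this sketch does not handle the dependent (repeated-measures or dyadic) case the article studies, which additionally requires the covariance between the two sample correlations:

```python
import numpy as np
from scipy.stats import norm

def fisher_z_diff_test(r1, n1, r2, n2):
    """Two-sided z test for rho1 - rho2 with *independent* samples."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)   # Fisher's z transform
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3)) # standard error of z1 - z2
    z = (z1 - z2) / se
    return z, 2 * norm.sf(abs(z))             # z statistic and p value

# Illustrative values: r = .60 vs. r = .30, each from n = 100.
z, p = fisher_z_diff_test(0.6, 100, 0.3, 100)
print(round(z, 2), round(p, 4))
```

Because arctanh is applied to raw-score correlations, measurement error and nonnormality propagate directly into the test, which motivates the comparison of alternatives in the article.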
-
A causal theory of error scores. Psychological Methods (IF 10.929) Pub Date : 2022-07-25 Riet van Bork, Mijke Rhemtulla, Klaas Sijtsma, Denny Borsboom
In modern test theory, response variables are a function of a common latent variable that represents the measured attribute, and error variables that are unique to the response variables. While considerable thought goes into the interpretation of latent variables in these models (e.g., validity research), the interpretation of error variables is typically left implicit (e.g., describing error variables
-
Evaluating classification performance: Receiver operating characteristic and expected utility. Psychological Methods (IF 10.929) Pub Date : 2022-07-21 Yueran Yang
One primary advantage of receiver operating characteristic (ROC) analysis is considered to be its ability to quantify classification performance independently of factors such as prior probabilities and utilities of classification outcomes. This article argues the opposite. When evaluating classification performance, ROC analysis should consider prior probabilities and utilities. By developing expected
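The article's central point, that the value of an ROC operating point depends on priors and utilities, can be made concrete with the standard expected-utility decomposition over the four classification outcomes; a minimal sketch with hypothetical utilities:

```python
def expected_utility(tpr, fpr, base_rate, u_tp, u_fn, u_fp, u_tn):
    """Expected utility of a classifier operating point, combining its
    ROC coordinates with the prior probability and outcome utilities."""
    pos = base_rate * (tpr * u_tp + (1 - tpr) * u_fn)
    neg = (1 - base_rate) * (fpr * u_fp + (1 - fpr) * u_tn)
    return pos + neg

# The same ROC point evaluated under two priors: its expected utility
# changes even though sensitivity and specificity do not.
point = dict(tpr=0.8, fpr=0.2, u_tp=1.0, u_fn=-1.0, u_fp=-0.5, u_tn=0.5)
print(expected_utility(base_rate=0.5, **point))
print(expected_utility(base_rate=0.1, **point))
```

With a 50% base rate the point is worth more than with a 10% base rate, illustrating why ROC coordinates alone cannot settle which operating point (or classifier) is better.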