Testing Moderation in Business and Psychological Studies with Latent Moderated Structural Equations

  • Original Paper
Journal of Business and Psychology

Abstract

Most organizational researchers understand the detrimental effects of measurement errors when testing relationships among latent variables and hence adopt structural equation modeling (SEM) to control for measurement errors. Nonetheless, many of them revert to regression-based approaches, such as moderated multiple regression (MMR), when testing for moderating and other nonlinear effects. The predominance of MMR is likely due to the limited evidence for the superiority of latent interaction approaches over regression-based approaches, combined with the complicated procedures previously required for testing latent interactions. In this teaching note, we first briefly explain the latent moderated structural equations (LMS) approach, which estimates latent interaction effects while controlling for measurement errors. Then we explain the reliability-corrected single-indicator LMS (RCSLMS) approach to testing latent interactions with summated scales and correcting for measurement errors, yielding results that approximate those from LMS. Next, we report simulation results illustrating that LMS and RCSLMS outperform MMR in terms of the accuracy of point estimates and confidence intervals for interaction effects under various conditions. Then, we show how LMS and RCSLMS can be implemented in Mplus, providing an example-based tutorial that demonstrates a 4-step procedure for testing a range of latent interactions, as well as the decisions required at each step. Finally, we conclude with answers to some frequently asked questions about testing latent interactions. As supplementary files to support researchers, we provide a narrated PowerPoint presentation, all Mplus syntax and output files, data sets for numerical examples, and Excel files for conducting the loglikelihood values difference test and plotting the latent interaction effects.

Figures 1–4 are available in the full article.

Notes

  1. As in most discussions of SEM, measurement errors in this article refer to random measurement errors only. Systematic measurement errors, such as method effects and researcher-introduced bias, can be accounted for only through specific research designs and other modeling approaches.

  2. Equation 1 shows that either X or Z can be treated as the moderator; that is, the independent variable and the moderator are treated identically in statistical terms. Theory and hypotheses should be used to determine which is the independent variable and which is the moderator (Dawson, 2014).
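The statistical symmetry described in note 2 can be made explicit by regrouping the interaction equation (assuming Equation 1 takes the standard form, with intercept $b_0$ and interaction coefficient $b_3$):

$$ Y = b_0 + b_1 X + b_2 Z + b_3 XZ = b_0 + (b_1 + b_3 Z)\,X + b_2 Z = b_0 + b_1 X + (b_2 + b_3 X)\,Z, $$

so either variable can be read as moderating the other's simple slope.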

  3. Results presented in Tables 1 to 3 are averages across all three patterns of factor loadings. Tables showing results for each pattern of factor loadings are available in the supplementary files.

  4. The reliability level should be set a priori; a sensitivity analysis using multiple levels of reliability is not recommended because that would be exploratory (Savalei, 2019). Based on the results of a simulation study, Savalei (2019) recommended fixing the measurement error a priori using a slightly overestimated reliability (+0.05) when the researcher is comfortable guessing the reliability. However, if the reliability is underestimated (−0.05) or overestimated to a larger extent (+0.15), this approach yields less accurate and less precise parameter estimates, as well as lower CI coverage and inflated Type I error rates, compared with either SEM or the single-indicator approach with Cronbach’s alpha correction. These negative consequences were more pronounced when the sample size approached 200.
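The reliability correction in note 4 can be illustrated with a short sketch, assuming the standard single-indicator formula in which the fixed error variance equals (1 − reliability) × sample variance of the summated scale; the variance and reliability values below are hypothetical:

```python
# Sketch: fixing the error variance of a summated-scale single indicator
# from an a priori reliability (assumed formula: theta = (1 - rho) * s^2).

def error_variance(sample_variance: float, reliability: float) -> float:
    """Error variance to fix for a reliability-corrected single indicator."""
    return (1.0 - reliability) * sample_variance

s2 = 1.44               # hypothetical sample variance of the summated scale
rho = 0.80              # hypothetical a priori reliability estimate
rho_adj = rho + 0.05    # Savalei's (2019) slight overestimation adjustment

theta = error_variance(s2, rho_adj)   # value fixed in the measurement model
```

In Mplus this value would be fixed on the indicator’s residual variance; the exact syntax appears in the authors’ supplementary files.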

  5. Little, Slegers, and Card (2006) suggested an effects-coding method as an alternative way of identifying and scaling latent variables: the sum of factor loadings across indicators is fixed to the number of indicators (for example, the sum of factor loadings is fixed at 4 when the construct has four indicators), and the sum of indicator intercepts is fixed at 0. While the effects-coding method is equivalent to the marker variable method, which fixes the factor loading of a marker variable to one, differing only in the scale and intercept of the latent variable, the effects-coding method is particularly useful for interpreting latent means. However, it may not be appropriate for estimating latent interactions, since the latent variables that form the latent interaction should be centered (such that the mean is zero) to improve the interpretability of the estimated regression coefficients. In addition, interpreting unstandardized regression coefficients is more challenging when the effects-coding method is adopted. Moreover, the effects-coding method requires all indicators to share the same response scale, whereas the marker variable method in LMS does not. We also caution against achieving identification by standardizing the latent variables (fixing their variances at 1) because the standard errors of the estimated parameters may be biased. Given these various issues, we recommend adopting the marker variable method to provide identification when LMS is used.
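For a construct with four indicators, the two identification schemes contrasted in note 5 can be sketched as follows ($\lambda_i$ are factor loadings and $\tau_i$ indicator intercepts):

$$ \text{Effects coding: } \sum_{i=1}^{4} \lambda_i = 4, \quad \sum_{i=1}^{4} \tau_i = 0; \qquad \text{marker variable: } \lambda_1 = 1. $$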

  6. Although rare, there may be instances where Model 2 does not fit the data adequately and yet Model 3 fits the data significantly better than Model 2. Unfortunately, we will not be able to tell if Model 3 fits the data adequately in such situations.
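Model comparisons such as the one in note 6 use the loglikelihood difference test mentioned in the Abstract. Below is a minimal sketch of the uncorrected version with hypothetical loglikelihood values; note that with the robust (MLR) estimator the difference must be scaled using the correction factors from the Mplus output, which the authors’ Excel file automates:

```python
# Sketch: naive loglikelihood difference test (no MLR scaling correction).
# The loglikelihood values below are hypothetical.

def loglik_diff(ll_restricted: float, ll_full: float) -> float:
    """D = -2 * (LL_restricted - LL_full); D is chi-square distributed
    with df equal to the difference in the number of free parameters."""
    return -2.0 * (ll_restricted - ll_full)

ll_model2 = -1520.3   # hypothetical: model without the latent interaction
ll_model3 = -1516.1   # hypothetical: model with the latent interaction

d = loglik_diff(ll_model2, ll_model3)
# With df = 1, compare D against the chi-square critical value 3.84 (alpha = .05)
significant = d > 3.84
```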

  7. However, these five levels are arbitrarily selected, and if other levels of the moderator can be meaningfully identified (at least five levels for a continuous variable, because the confidence intervals are not linear), those more specific levels should be used instead. For example, when the moderator is a directly observed variable measured on a 5-point scale, using 1, 2, 3, 4, and 5 as the five levels may be more meaningful. Note that since the moderator is mean-centered when converted into a latent variable before estimating the latent interaction effects, each level of the moderator in a simple slope test should be mean-centered as well. For example, suppose the mean of the moderator Z is 3.25 on a 5-point scale; this mean should be subtracted from each of the five levels of the moderator, and the simple slope of the relationship between X and Y at Z = 1 can be defined as follows under MODEL CONSTRAINT:

    $$ \mathrm{Slope}_{Z=1} = b_1 + b_3 \times (-2.25), \quad \text{where } 1 - 3.25 = -2.25. $$
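The mean-centering arithmetic behind this simple-slope definition can be sketched as follows; b1 and b3 are hypothetical coefficient values standing in for the estimates Mplus would produce:

```python
# Sketch: mean-centering the moderator levels for simple-slope tests (note 7).
z_mean = 3.25                      # mean of moderator Z on its 5-point scale
levels = [1, 2, 3, 4, 5]
centered = [z - z_mean for z in levels]   # e.g. 1 - 3.25 = -2.25

b1, b3 = 0.40, 0.15                # hypothetical main-effect and interaction coefficients
slopes = {z: b1 + b3 * (z - z_mean) for z in levels}
```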
  8. Little’s MCAR test is also available in the R package BaylorEdPsych by A. Alexander Beaujean at https://www.rdocumentation.org/packages/BaylorEdPsych/versions/0.5.



Corresponding author

Correspondence to Gordon W. Cheung.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

ESM 1

(PDF 100 kb)


About this article


Cite this article

Cheung, G.W., Cooper-Thomas, H.D., Lau, R.S. et al. Testing Moderation in Business and Psychological Studies with Latent Moderated Structural Equations. J Bus Psychol 36, 1009–1033 (2021). https://doi.org/10.1007/s10869-020-09717-0
