
Strategies to detect invalid performance in cognitive testing: An updated and extended meta-analysis

Current Psychology

Abstract

This review updates previous meta-analytic findings on validity indicators and provides new evidence on moderators of invalid performance by investigating differences between noncredible and credible performances of clinical and non-clinical participants. Data from 133 studies (50 from previous meta-analyses and 83 new articles) were extracted and analyzed with respect to type of research design, coaching, stimuli, and detection strategy. Overall effects were largest for experimental studies comparing non-clinical simulators with community controls (Mean d = 1.648, 95% CI = 1.46–1.835, k = 41) and clinical simulators with clinical controls (Mean d = 1.728, 95% CI = 1.224–2.232, k = 6), followed by known-groups comparisons (Mean d = 1.06, 95% CI = 0.955–1.166, k = 50) and experimental studies comparing community simulators with patients (Mean d = 0.877, 95% CI = 0.751–1.004, k = 53). Consistent with previous findings, symptom-coaching proved more effective than test-coaching in reducing differences between non-clinical simulators and clinical patients. Extending the previous reviews, the analysis of stimulus material showed the largest effects, and the greatest resistance to coaching, for tasks using numbers and letters & symbols. The analysis of detection strategies across types of contrast, instruments, and coaching yielded the largest effects for Recognition. Effects were moderate for Magnitude of error, Performance curve, and Recall, and lower and more variable for Reaction time, Floor effect, and Consistency, with stand-alone indicators generally yielding larger differences than embedded indices. Methodological and practical implications are discussed regarding the testing of validity indicators in research and combining them in assessment.
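The pooled values reported above (e.g., Mean d = 1.648, 95% CI = 1.46–1.835, k = 41) are of the kind produced by inverse-variance meta-analytic pooling of study-level effect sizes. As an illustration only (the abstract does not specify the exact pooling model or software the authors used, and the input values below are hypothetical), the following Python sketch shows how a random-effects (DerSimonian–Laird) mean d and its 95% confidence interval could be computed from study-level Cohen's d values and their variances.

    import numpy as np

    def random_effects_pool(d, v):
        """Pool study-level effect sizes (d) with variances (v) using the
        DerSimonian-Laird random-effects estimator. Returns the pooled
        mean d, its standard error, and a 95% confidence interval."""
        d = np.asarray(d, dtype=float)
        v = np.asarray(v, dtype=float)
        k = d.size

        # Fixed-effect weights and pooled estimate (needed to compute Q).
        w = 1.0 / v
        d_fixed = np.sum(w * d) / np.sum(w)

        # Cochran's Q and the between-study variance tau^2.
        q = np.sum(w * (d - d_fixed) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)

        # Random-effects weights incorporate tau^2.
        w_star = 1.0 / (v + tau2)
        d_pooled = np.sum(w_star * d) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return d_pooled, se, (d_pooled - 1.96 * se, d_pooled + 1.96 * se)

    # Hypothetical study-level effect sizes and variances (not the paper's data).
    d_values = [1.80, 1.55, 1.40, 1.95, 1.62]
    variances = [0.05, 0.08, 0.06, 0.10, 0.07]

    mean_d, se, (lo, hi) = random_effects_pool(d_values, variances)
    print(f"Mean d = {mean_d:.3f}, 95% CI = {lo:.3f}-{hi:.3f}, k = {len(d_values)}")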



Data Availability

The data that support the findings of this study are openly available on OSF (link for anonymous peer review): https://osf.io/rtypw/?view_only=0718a7b2ecc045b9aeabc0cb40688a5c


Funding

No funding was received for this study.

Author information


Corresponding author

Correspondence to Iulia Crişan.

Ethics declarations

Conflict of Interest

The authors report no potential conflict of interest.

Ethical Approval

All procedures performed were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Consent to Participate

As we used exclusively published data that were publicly available, no explicit informed consent was obtained from the individual participants included in the study.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

ESM 1

(DOCX 113 kb)


About this article


Cite this article

Crişan, I., Maricuţoiu, L. P., & Sava, F. A. Strategies to detect invalid performance in cognitive testing: An updated and extended meta-analysis. Curr Psychol 42, 3236–3257 (2023). https://doi.org/10.1007/s12144-021-01659-x

