
Evaluating individual differences in rewarded Stroop performance: reliability and associations with self-report measures

  • Original Article
  • Published in Psychological Research

Abstract

In three separate experiments, we examined the reliability of, and relationships between, self-report measures and behavioral response time measures of reward sensitivity. Using a rewarded Stroop task, we showed that reward-associated, but task-irrelevant, information interfered with task performance (MIRA) in all three experiments, but individual differences in MIRA were unreliable both within a session and over a period of approximately 4 weeks, providing clear evidence that MIRA is not a good individual differences measure. In contrast, when the task-relevant information was rewarded, individual differences in performance benefits were remarkably reliable, even when performance was examined one year later with a different version of the rewarded Stroop task. Despite the high reliability of the behavioral measure of reward responsiveness, behavioral reward responsiveness was not associated with self-reported reward responsiveness scores on validated questionnaires, but was associated with greater self-reported self-control. Results are discussed in terms of what is actually being measured in the rewarded Stroop task.


(Figs. 1–4 appear in the full article.)


Notes

  1. Only two of the conditions are included in the calculation of MIRA, and the smaller number of trials per participant might have artificially reduced reliability for MIRA relative to the reward responsiveness measure, which contains many more trials. To test this, separate correlations were conducted with a modified reward responsiveness measure that included average RTs from only two conditions: RTs on no-reward trials where the Stroop condition was incongruent and the word was not associated with reward, minus RTs on potential-reward trials where the Stroop condition was incongruent and the word was not associated with reward. This equated the number of trials in the reward responsiveness and MIRA measures. In all experiments, reward responsiveness reliabilities remained remarkably high and consistent despite the reduced number of trials. Therefore, the lower reliability of the MIRA measure relative to the reward responsiveness measure is not simply a result of the smaller number of trials per participant.

  2. One alternative to calculating difference scores is the residualized measure approach. To calculate the residualized MIRA measure, the same conditions were used: average RTs when the word was reward-unrelated were regressed on average RTs when the word was reward-related, and the standardized residuals were saved for further analyses. Correlations between the original and residualized MIRA measures were very high (r's > .87), and reliabilities for the residualized MIRA measure remained poor in all experiments.
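The two-condition difference score and its reliability, as described in Note 1, can be sketched as follows. This is illustrative only: the data layout, simulated RT values, and the use of a Spearman-Brown-corrected odd/even split-half correlation are assumptions, not taken from the paper's analysis code.

```python
import numpy as np

def difference_score(rt_no_reward, rt_potential_reward):
    """Two-condition score: mean RT on no-reward trials minus mean RT
    on potential-reward trials (both incongruent, reward-unrelated word).
    Positive values indicate faster responding when reward is possible."""
    return np.mean(rt_no_reward) - np.mean(rt_potential_reward)

def split_half_reliability(scores_a, scores_b):
    """Correlate the two half-scores across participants and apply the
    Spearman-Brown correction for full test length."""
    r = np.corrcoef(scores_a, scores_b)[0, 1]
    return 2 * r / (1 + r)

# Simulated RTs (ms): 50 participants, 40 trials per condition.
rng = np.random.default_rng(0)
n_part, n_trials = 50, 40
true_effect = rng.normal(40, 15, n_part)              # stable per-person reward benefit
rt_no_reward = 700 + rng.normal(0, 60, (n_part, n_trials))
rt_reward = 700 - true_effect[:, None] + rng.normal(0, 60, (n_part, n_trials))

# Odd/even split of trials within each participant.
odd = [difference_score(nr[1::2], rw[1::2]) for nr, rw in zip(rt_no_reward, rt_reward)]
even = [difference_score(nr[0::2], rw[0::2]) for nr, rw in zip(rt_no_reward, rt_reward)]
print(round(split_half_reliability(odd, even), 2))
```

With real data, the same difference score would be computed from each participant's trial-level RTs rather than simulated values.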
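The residualized-score approach in Note 2 can be sketched with a simple ordinary-least-squares fit. The function and variable names are illustrative assumptions; the regression direction follows the note (reward-unrelated RTs regressed on reward-related RTs), and standardization of the residuals is done with the sample standard deviation.

```python
import numpy as np

def residualized_score(y, x):
    """Regress y on x (with intercept) via ordinary least squares and
    return the standardized residuals: the part of y not predicted by x."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones_like(y), np.asarray(x, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return (resid - resid.mean()) / resid.std(ddof=1)

# Per the note: y = average RTs when the word was reward-unrelated,
# x = average RTs when the word was reward-related (one value per participant).
rng = np.random.default_rng(1)
x = rng.normal(700, 50, 100)
y = x + rng.normal(30, 20, 100)   # RTs in one condition track baseline speed
mira_resid = residualized_score(y, x)
print(mira_resid.shape)
```

Because the residuals are standardized, the resulting scores have mean 0 and unit variance by construction, which is why they correlate so highly with the raw difference scores while leaving the reliability pattern unchanged.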


Acknowledgements

The work was supported by a Canadian Graduate Scholarship from the Natural Sciences and Engineering Research Council of Canada (NSERC) to the first author, and by a grant from NSERC to the second author. We thank Carly Lundale, Daniella Zambito and Lauren Kremble for their assistance with data collection.

Funding

The Natural Sciences and Engineering Research Council (NSERC) provided Discovery grant funding to KA and scholarship funding to BP.

Author information

Corresponding author

Correspondence to Karen M. Arnell.

Ethics declarations

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Pitchford, B., Arnell, K.M. Evaluating individual differences in rewarded Stroop performance: reliability and associations with self-report measures. Psychological Research 87, 686–703 (2023). https://doi.org/10.1007/s00426-022-01689-5
