
Abstract

Despite recognition of the harmful effects of common method bias (CMB), its causes, consequences, and remedies are still not well understood. Therefore, the purpose of this article is to review our current knowledge of CMB and provide recommendations on how to control it. We organize our review into five main sections. First, we explain the harmful effects of CMB (why it is bad). Second, we discuss the complexity caused by the fact that there are multiple sources of CMB, several of which are likely to be present in any study. Third, we present evidence that the conditions under which CMB is likely to occur are relatively widespread, and fourth, we explain why CMB is not easy to fix. Finally, we identify several avenues for future research.
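As a brief illustration of the first point (why CMB is harmful), the following minimal simulation is a sketch, not part of the article, and uses assumed values throughout: it shows how a method factor shared by two self-report measures (e.g., same rater, same occasion) can inflate their observed correlation well above the true correlation between the underlying constructs.

```python
# Minimal sketch (illustrative only, with assumed parameter values):
# how shared method variance can inflate an observed correlation.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

true_r = 0.20          # assumed true correlation between constructs A and B
method_loading = 0.50  # assumed loading of both measures on a common method factor

# Latent constructs with the assumed true correlation
cov = np.array([[1.0, true_r], [true_r, 1.0]])
a, b = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# A single method factor shared by both measures (e.g., same source, same context)
method = rng.normal(size=n)

# Observed scores = construct + shared method effect + unique error
x = a + method_loading * method + rng.normal(scale=0.5, size=n)
y = b + method_loading * method + rng.normal(scale=0.5, size=n)

print(f"True construct correlation: {true_r:.2f}")
print(f"Observed correlation:       {np.corrcoef(x, y)[0, 1]:.2f}")
```

Under these assumed values the observed correlation comes out near .30 even though the true construct correlation is .20, the kind of inflation that can lead researchers to overstate relationships when both variables are measured with the same method.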



  • Article Type: Review Article