Original Article

Practical Considerations for Conducting Job Analysis Linkage Exercises

Published Online: https://doi.org/10.1027/1866-5888/a000191

Abstract. An important phase of job analysis when developing selection tools is gathering linkage ratings between knowledge, skills, abilities, and other characteristics (KSAOs) and job tasks. However, the literature provides little guidance on best practices for collecting linkage ratings. Two studies were conducted to address this gap. Study 1 examined the interrater agreement of three types of raters: job incumbents, managers, and job analysts. Results revealed that job analysts showed the highest interrater agreement. Study 2 examined the impact of frame-of-reference (FOR) training for raters. Results suggested that a brief consensus training session substantially improved agreement among raters. Together, these studies offer guidance on best practices for obtaining high-quality linkage ratings.
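
Both studies evaluate rating quality in terms of interrater agreement. The abstract does not name the index used; a common choice in this literature is the within-group agreement index rWG, which compares the observed variance of ratings with the variance expected if raters responded at random. Below is a minimal sketch of that computation for a single linkage item, assuming an A-point rating scale and a uniform null distribution; the function name and example ratings are illustrative, not taken from the article.

```python
import numpy as np

def rwg_single_item(ratings, n_options):
    """Within-group interrater agreement (rWG) for one item,
    computed against a uniform (rectangular) null distribution."""
    ratings = np.asarray(ratings, dtype=float)
    observed_var = ratings.var(ddof=1)        # variance of ratings across raters
    null_var = (n_options ** 2 - 1) / 12.0    # expected variance under random responding
    return 1.0 - observed_var / null_var

# Five hypothetical raters linking one KSAO to one task on a 1-5 scale
print(rwg_single_item([4, 4, 5, 4, 3], n_options=5))  # 0.75; values near 1 indicate strong agreement
```

In a design like Study 1's, one would compute such an index separately for each rater group (incumbents, managers, analysts) and compare the resulting levels of agreement across groups.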
