
Analyzing the impact of reputational bias on global university rankings based on objective research performance data: the case of the Shanghai Ranking (ARWU)


Abstract

This paper analyzes the potential impact of reputational bias on academic classifications based on current research performance measures. Empirical evidence supports the existence of a noticeable reputational bias in the Academic Ranking of World Universities (ARWU), informally known as the Shanghai Ranking. Although the ranking uses reliable, objective data, its results are partially contaminated by the halo effect that pervades the peer review processes and citation practices behind two of ARWU's main indicators: highly cited researchers (HiCi) and papers published in Nature and Science (N&S). ARWU results may in turn reinforce reputational bias, creating a vicious feedback loop. The study describes a method for quantifying the bias present in these two indicators. Our findings reveal that bias exists in N&S but not in HiCi, and that while it benefits top universities and Japanese universities, it penalizes their Australian counterparts. The paper opens a debate that will require future research to identify and address the shortcomings caused by reputational bias, both in journal procedures for evaluating and accepting manuscripts and in the methodology of global university rankings.


Notes

  1. This bias can occur at the level of research groups, academic departments, labs, and centers. An author affiliated with a highly prestigious group, for example, can benefit from the halo effect on an editor who, knowing that the author belongs to that group, values the author's work more highly. When this happens the university benefits as well, so in practice it does not matter whether the affiliation bias is caused by the reputation of the university as a whole or by that of one of its parts.

  2. Any bibliometric indicator that counts papers or citations is potentially affected by reputational bias. We have focused our analysis on the ARWU components that are potentially most affected by it. This does not mean that the PUB indicator, another ARWU component based on papers indexed in the Science Citation Index-Expanded and the Social Science Citation Index, does not also suffer from the bias. That bias is, however, more difficult to capture, because a large proportion of the papers counted by this indicator are published in low-prestige journals, which are not considered by the more prestigious institutions that generate reputational bias.

  3. Roberts and Dowling (2002) obtain their 'residual reputation' (our PR variable) from the residuals of a regression of TR on four current performance variables, with adjusted R² = .15. We chose a simpler method because our TR variable is non-normal and because we have only one current performance variable, which allows us to work with a simple subtraction. Nevertheless, we also obtained the residuals of a regression of (log-transformed) TR on GlobalGRAS (adjusted R² = .21); since their correlation with the PR calculated by subtraction is very high (r = .86), the regression can be considered an alternative method.
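A minimal sketch of the two ways of obtaining PR described in this note, assuming a pandas DataFrame with hypothetical columns TR and GlobalGRAS (the column names, the 0–100 scaling, and the log1p transform are our illustrative assumptions, not necessarily the exact computation used in the paper):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def add_residual_reputation(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrates two ways of estimating 'residual reputation' (PR).

    Assumes hypothetical columns 'TR' (total reputation) and 'GlobalGRAS'
    (current research performance); both are min-max scaled to 0-100 here
    so that the subtraction is meaningful (an assumption, not necessarily
    the paper's exact recipe).
    """
    scaled = df[["TR", "GlobalGRAS"]].apply(
        lambda s: 100 * (s - s.min()) / (s.max() - s.min())
    )

    # Method used in the paper: simple subtraction.
    df["PR_subtraction"] = scaled["TR"] - scaled["GlobalGRAS"]

    # Alternative (Roberts & Dowling 2002 style): residuals of a regression
    # of log-transformed reputation on current performance.
    X = sm.add_constant(scaled["GlobalGRAS"])
    y = np.log1p(scaled["TR"])
    df["PR_residual"] = sm.OLS(y, X).fit().resid

    return df
```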

  4. See www.shanghairanking.com/Shanghairanking-Subject-Rankings/index.html.

  5. A table showing the correspondence can be found here: www.shanghairanking.com/Shanghairanking-Subject-Rankings/attachment/Mapping_between_Web_of_Science_categories_and_54_academic_subjects.pdf.

  6.  www.shanghairanking.com/Shanghairanking-Subject-Rankings/Methodology-for-ShanghaiRanking-Global-Ranking-of-Academic-Subjects-2018.html.

  7. http://www.shanghairanking.com/subject-survey/survey-results-2018.html.

  8. http://www.shanghairanking.com/subject-survey/awards.html.

References

  • Alessandri, S. W., Yang, S. U., & Kinsey, D. F. (2006). An integrative approach to university visual identity and reputation. Corporate Reputation Review, 9(4), 258–270.
  • Armstrong, J. S., & Sperry, T. (1994). Business school prestige—Research versus teaching. Interfaces, 24(2), 13–43.
  • Bastedo, M. N., & Bowman, N. A. (2010). The U.S. News and World Report college rankings: Modeling institutional effects on organizational reputation. American Journal of Education, 116, 163–184.
  • Becker, G. S. (1957). The economics of discrimination. Chicago: University of Chicago Press.
  • Beghin, J., & Park, B. (2019). The exports of higher education services from OECD countries to Asian countries: A gravity approach. Economics Working Paper 19015, Department of Economics, Iowa State University. lib.dr.iastate.edu/econ_workingpapers/80.
  • Bentley, R., & Blackburn, R. (1990). Changes in academic research performance over time: A study of institutional accumulative advantage. Research in Higher Education, 31(4), 327–353.
  • Bogocz, J., Bak, A., & Polanski, J. (2014). No free lunches in nature? An analysis of the regional distribution of the affiliations of Nature publications. Scientometrics, 101(1), 547–568.
  • Bornmann, L. (2011). Peer review and bibliometrics: Potentials and problems. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings (The Changing Academy: The Changing Academic Profession in International Comparative Perspective, Vol. 3). Dordrecht: Springer Science.
  • Bowman, N. A., & Bastedo, M. N. (2011). Anchoring effects in world university rankings: Exploring biases in reputation scores. Higher Education, 61, 431–444.
  • Brown, B., & Perry, S. (1994). Removing the financial performance halo from Fortune's "most admired" companies. Academy of Management Journal, 37(5), 1347–1359.
  • Brown, B., & Perry, S. (1995). Focal paper: Halo-removed residuals of Fortune's responsibility to the community and environment—A decade of data. Business and Society, 34(2), 199–215.
  • Cole, S., & Cole, J. R. (1967). Scientific output and recognition: A study in the operation of the reward system in science. American Sociological Review, 32(3), 377–390.
  • Daraio, C., Bonaccorsi, A., & Simar, L. (2015). Rankings and university performance: A conditional multidimensional approach. European Journal of Operational Research, 244(3), 918–930.
  • Dehon, C., McCathie, A., & Verardi, V. (2010). Uncovering excellence in academic rankings: A closer look at the Shanghai ranking. Scientometrics, 83(2), 515–524.
  • Dey, E. L., Milem, J. F., & Berger, J. B. (1997). Changing patterns of publication productivity: Accumulative advantage or institutional isomorphism? Sociology of Education, 70(4), 308–323.
  • Docampo, D. (2013). Reproducibility of the Shanghai academic ranking of world universities results. Scientometrics, 94(2), 567–587.
  • Docampo, D., & Cram, L. (2015). On the effects of institutional size in university classifications: The case of the Shanghai ranking. Scientometrics, 102(2), 1325–1346.
  • Ellis, L. V., & Durden, G. C. (1991). Why economists rank their journals the way they do. Journal of Economics and Business, 43, 265–270.
  • ERA. (2019). The state of Australian university research 2018–19: ERA national report. https://dataportal.arc.gov.au/ERA/NationalReport/2018. Accessed February 5, 2020.
  • Fowles, J., Frederickson, H. G., & Koppell, J. G. (2016). University rankings: Evidence and a conceptual framework. Public Administration Review, 76(5), 790–803.
  • Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (1998). Multivariate data analysis (Vol. 5, No. 3, pp. 207–219). Upper Saddle River, NJ: Prentice Hall.
  • Hazelkorn, E. (Ed.). (2016). Global rankings and the geopolitics of higher education: Understanding the influence and impact of rankings on higher education, policy and society. Abingdon: Taylor & Francis.
  • Hazelkorn, E. (2017). Rankings and higher education: Reframing relationships within and between states. Centre for Global Higher Education working paper. London, UK: UCL Institute of Education.
  • Helm, S. (2007). One reputation or many? Comparing stakeholders' perceptions of corporate reputation. Corporate Communications: An International Journal, 12(3), 238–254.
  • Hertig, H. P. (2016). Universities, rankings and the dynamics of global higher education: Perspectives from Asia, Europe and North America. Springer.
  • Kauppi, N. (2018). The global ranking game: Narrowing academic excellence through numerical objectification. Studies in Higher Education, 43(10), 1750–1762.
  • Keith, B. (1999). The institutional context of departmental prestige in American higher education. American Educational Research Journal, 39(3), 409–445.
  • Keith, B. (2001). Organizational contexts and university performance outcomes: The limited role of purposive action in the management of institutional status. Research in Higher Education, 42(5), 493–516.
  • Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1), 2–17.
  • Leiter, B. (2006). Commentary: How to rank law schools. Indiana Law Journal, 81, 47–52.
  • Li, M., Shankar, S., & Tang, K. K. (2011a). Why does the USA dominate university league tables? Studies in Higher Education, 36(8), 923–937.
  • Li, M., Shankar, S., & Tang, K. K. (2011b). Catching up with Harvard: Results from regression analysis of world universities league tables. Cambridge Journal of Education, 41(2), 121–137.
  • Linton, J. D., Tierney, R., & Walsh, S. T. (2011). Publish or perish: How are research and reputation related? Serials Review, 37(4), 244–257.
  • Lowry, R. C., & Silver, B. D. (1996). A rising tide lifts all boats: Political science department reputation and the reputation of the university. PS: Political Science & Politics, 29(2), 161–167.
  • Medoff, M. H. (2006). Evidence of a Harvard and Chicago Matthew effect. Journal of Economic Methodology, 13(4), 485–506.
  • Merton, R. K. (1968). The Matthew effect in science: The reward and communication systems of science are considered. Science, 159(3810), 56–63.
  • Mingers, J., & Xu, F. (2010). The drivers of citations in management science journals. European Journal of Operational Research, 205(2), 422–430.
  • Morgeson, F. P., & Nahrgang, J. D. (2008). Same as it ever was: Recognizing stability in the Business Week rankings. Academy of Management Learning & Education, 7, 26–41.
  • Norton, A., Cherastidtham, I., & Mackey, W. (2018). Mapping Australian higher education 2018. Grattan Institute. ISBN 978-0-6483311-2-4. https://grattan.edu.au/wp-content/uploads/2018/09/907-Mapping-Australian-higher-education-2018.pdf. Accessed February 1, 2020.
  • Pang, L. (2018). How Tsinghua became a world class research university: A case study on the impact of rankings on a Chinese higher education institution (Doctoral dissertation).
  • Petersen, A. M. (2019). Megajournal mismanagement: Manuscript decision bias and anomalous editor activity at PLOS ONE. Journal of Informetrics, 13(4), 1–23.
  • Phillips, N. (2017a). Japanese research leaders warn about national science decline. Nature News, 17 October 2017.
  • Phillips, N. (2017b). Striving for a research renaissance. Nature, 543(7646), S7.
  • Piro, F. N., & Sivertsen, G. (2016). How can differences in international university rankings be explained? Scientometrics, 109(3), 2263–2278.
  • Rauhvargers, A. (2011). Report on rankings 2011: Global university rankings and their impact. Brussels: European University Association.
  • Rindova, V. P., Williamson, I. O., Petkova, A. P., & Sever, J. M. (2005). Being good or being known: An empirical examination of the dimensions, antecedents, and consequences of organizational reputation. Academy of Management Journal, 48(6), 1033–1049.
  • Roberts, P. W., & Dowling, G. R. (2002). Corporate reputation and sustained superior financial performance. Strategic Management Journal, 23(12), 1077–1093.
  • Safón, V. (2012). Can the reputation of an established business school change? Management in Education, 26(4), 169–180.
  • Safón, V. (2013). What do global university rankings really measure? The search for the X factor and the X entity. Scientometrics, 97(2), 223–244.
  • Safón, V. (2019). Inter-ranking reputational effects: An analysis of the Academic Ranking of World Universities (ARWU) and the Times Higher Education World University Rankings (THE) reputational relationship. Scientometrics, 121(2), 897–915.
  • Selten, F., Neylon, C., Huang, C. K., & Groth, P. (2019). A longitudinal analysis of university rankings. arXiv preprint arXiv:1908.10632.
  • Shapiro, C. (1983). Premiums for high quality products as returns to reputations. The Quarterly Journal of Economics, 98(4), 659–679.
  • Shin, J. C. (2011). Organizational effectiveness and university rankings. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings (The Changing Academy: The Changing Academic Profession in International Comparative Perspective, Vol. 3). Dordrecht: Springer Science.
  • Shin, J. C., & Toutkoushian, R. K. (2011). The past, present, and future of university rankings. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings (The Changing Academy: The Changing Academic Profession in International Comparative Perspective, Vol. 3). Dordrecht: Springer Science.
  • Smyth, D. J. (1999). The determinants of the reputations of economics departments: Pages published, citations and the Andy Rooney effect. American Economist, 43(2), 49–58.
  • Sorz, J., Wallner, B., Seidler, H., & Fieder, M. (2015). Inconsistent year-to-year fluctuations limit the conclusiveness of global higher education rankings for university management. PeerJ, 3(e1217), 1–14.
  • Spence, M. (1973). Job market signaling. Quarterly Journal of Economics, 87(3), 355–374.
  • Stake, J. E. (2006). The interplay between law school rankings, reputations, and resource allocation: Ways rankings mislead. Indiana Law Journal, 81, 229–270.
  • Stella, A., & Woodhouse, D. (2006). Ranking of higher education institutions. AUQA Occasional Publications Number 6. Australian Universities Quality Agency.
  • Teichler, U. (2011). Social contexts and systemic consequence of university rankings: A meta-analysis of the ranking literature. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings (The Changing Academy: The Changing Academic Profession in International Comparative Perspective, Vol. 3). Dordrecht: Springer Science.
  • THE. (2019). THE World Reputation Rankings 2019 methodology. https://www.timeshighereducation.com/world-university-rankings/world-reputation-rankings-2019-methodology. Accessed 26 December 2019.
  • Tōru, N. (2017). The decline and fall of Japanese science. Nippon.com, Science & Technology.
  • Toutkoushian, R. K., Dundar, H., & Becker, W. E. (1998). The National Research Council graduate program ratings: What are they measuring? Review of Higher Education, 21, 427–443.
  • Toutkoushian, R. K., & Webber, K. (2011). Measuring the research performance of postsecondary institutions. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings (The Changing Academy: The Changing Academic Profession in International Comparative Perspective, Vol. 3). Dordrecht: Springer Science.
  • Treadwell, D. F., & Harrison, T. M. (1994). Conceptualizing and assessing organizational image: Model images, commitment, and communication. Communication Monographs, 61(1), 63–85.
  • Van Den Besselaar, P., Heyman, U., & Sandström, U. (2017). Perverse effects of output-based research funding? Butler's Australian case revisited. Journal of Informetrics, 11(3), 905–918.
  • Van Dyke, N. (2008). Self- and peer-assessment disparities in university ranking schemes. Higher Education in Europe, 33(2–3), 285–293.
  • Van Raan, A. F. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133–143.
  • Vogel, R., Hattke, F., & Petersen, J. (2017). Journal rankings in management and business studies: What rules do we play by? Research Policy, 46(10), 1707–1722.
  • Volkwein, J. F., & Sweitzer, K. V. (2006). Institutional prestige and reputation among research universities and liberal arts colleges. Research in Higher Education, 47(2), 129–148.
  • Zitt, M., & Filliatreau, G. (2007). Big is (made) beautiful: Some comments about the Shanghai ranking of world-class universities. In The world class universities and ranking: Aiming beyond status (Part Two, pp. 141–160). Cluj, Romania: UNESCO-CEPES, Cluj University Press.

Author information

Correspondence to Vicente Safón.

Appendix: Methodology of the ARWU subject rankings (ARWUGRAS)

ARWU started publishing a large set of subject rankings (52 subjects) in 2016 (see note 4), a number that grew to 54 in 2018. The rankings in the 2019 edition are mainly based on bibliometric results from the period 2013–2017, aggregated through a weighted combination of the following indicators:

  • PUB: Publications (article type) in the Web of Science categories relevant for each subject (see note 5). For an institution to be considered, ARWUGRAS sets a publication threshold for each subject, as explained in the ranking methodology (see note 6).

  • CNCI: Average citations per article of those publications, normalized by Web of Science category and publication year.

  • IC: Percentage of international collaborations in those publications.

  • TOP: Publications (article type) in a list of key journals. In 40 of the subjects, the list was drawn from a relatively small survey of academics worldwide (see note 7); in the remaining subjects, the list is somewhat longer, comprising the journals in the top 20% of the 2017 JCR edition.

  • AWD: Relevant international awards won by staff since 1981, drawn from a list of 28 international prizes selected through a survey carried out by the ranking makers (see note 8). The indicator applies to 24 of the 54 subjects and is not used in the remaining 30.

The weights on the five indicators (four in most subjects) used to compose the final score vary from subject to subject. Leaving aside the award indicator, there are two sets of weights, as Table 7 shows: a Social Sciences set, used only for the 14 subjects in that research area, and a standard set, used for the remaining 40 subjects.

Table 7 Indicator weights, ARWUGRAS 2019

The AWD indicator, when used, receives a weight of 100, except in a few subjects (4), where the weight is only 20; as a result, the indicator is not particularly relevant to the final score in 34 of the 54 subjects.
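As a concrete illustration of how such a weighted combination can be computed, the sketch below aggregates per-indicator scores (assumed to be already on the 0–100 scale described in the next subsection) into a final subject score. The function name and the weight values are illustrative placeholders, not the actual Table 7 weights, and the simple weighted average shown here is only one reading of "weighted combination"; ARWU's exact aggregation is not reproduced.

```python
from typing import Dict

def gras_final_score(indicator_scores: Dict[str, float],
                     weights: Dict[str, float]) -> float:
    """Weighted average of the indicator scores available for a subject.

    Indicators missing for a subject (e.g., AWD where it is not used) are
    skipped and the remaining weights are renormalized.
    """
    used = {k: w for k, w in weights.items() if k in indicator_scores}
    total_weight = sum(used.values())
    return sum(indicator_scores[k] * w for k, w in used.items()) / total_weight

# Hypothetical example: a subject without the AWD indicator, with
# illustrative (not actual Table 7) weights.
scores = {"PUB": 62.0, "CNCI": 80.5, "IC": 45.0, "TOP": 38.0}
weights = {"PUB": 100, "CNCI": 100, "IC": 20, "TOP": 100, "AWD": 100}
print(round(gras_final_score(scores, weights), 1))
```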

Computation of scores in ARWUGRAS

Up to 2018, for any indicator, let r be the raw value for a given institution, rmax the raw value of the best-performing institution, and s the institution's score; then (as in Docampo 2013):

$$s = 100\sqrt{\frac{r}{r_{\max}}}$$
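A minimal sketch of this scoring rule (the function and argument names are ours):

```python
import math

def indicator_score(r: float, r_max: float) -> float:
    """Pre-2019 ARWUGRAS indicator score: 100 * sqrt(r / rmax)."""
    if r_max <= 0:
        raise ValueError("r_max must be positive")
    return 100 * math.sqrt(r / r_max)

# An institution with half the raw value of the best performer scores
# 100 * sqrt(0.5), roughly 70.7.
print(round(indicator_score(50, 100), 1))
```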

In the 2019 edition, a cap was placed on the computation of the CNCI indicator, which is now computed as follows:

  1. Calculate rmax for CNCI as the minimum of the raw value of the best-performing institution and the average of all raw values on the list.

  2. If an institution's raw value is larger than rmax, its final score is 100. Otherwise, the score is computed with the formula above.
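A self-contained sketch of the capped CNCI computation described in the two steps above, using hypothetical raw CNCI values:

```python
import math
from statistics import mean
from typing import List

def capped_cnci_scores(raw_cnci: List[float]) -> List[float]:
    """2019-style CNCI scores as described above: rmax is the minimum of the
    best raw value and the average of all raw values; institutions above
    rmax receive the maximum score of 100."""
    r_max = min(max(raw_cnci), mean(raw_cnci))
    return [100.0 if r > r_max else 100 * math.sqrt(r / r_max) for r in raw_cnci]

# Example with hypothetical raw CNCI values for four institutions.
print([round(s, 1) for s in capped_cnci_scores([0.8, 1.0, 1.4, 3.0])])
```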

Size-dependence of ARWUGRAS

Compared with the ARWU world university ranking, ARWUGRAS is less size-dependent owing to the inclusion of two purely size-independent indicators, CNCI and IC. The PUB indicator is clearly size-dependent. Of the other two indicators, TOP is size-dependent, as it is the number (rather than the share) of publications in top journals that is counted. The AWD indicator is harder to characterize as size-dependent, since small elite institutions may obtain extremely high AWD scores, something that is highly unlikely to happen with TOP. In short, leaving AWD aside, in four of the five broad ARWUGRAS areas (i.e., all but the Social Sciences) size-independent indicators account for 37.5% of the score; owing to the different set of weights used in the Social Sciences, their share falls to 19.5% there.
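As a back-of-the-envelope check of the 37.5% figure, assume for illustration standard weights of 100 for PUB, 100 for CNCI, 20 for IC, and 100 for TOP (Table 7 is not reproduced here, so these values are an assumption). The size-independent share outside the Social Sciences would then be

$$\frac{w_{\mathrm{CNCI}} + w_{\mathrm{IC}}}{w_{\mathrm{PUB}} + w_{\mathrm{CNCI}} + w_{\mathrm{IC}} + w_{\mathrm{TOP}}} = \frac{100 + 20}{100 + 100 + 20 + 100} = \frac{120}{320} = 0.375$$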


About this article


Cite this article

Safón, V., Docampo, D. Analyzing the impact of reputational bias on global university rankings based on objective research performance data: the case of the Shanghai Ranking (ARWU). Scientometrics 125, 2199–2227 (2020). https://doi.org/10.1007/s11192-020-03722-z

