The impact of using algorithms for managerial decisions on public employees' procedural justice

https://doi.org/10.1016/j.giq.2020.101536

Highlights

  • Using algorithms for managerial decisions can affect employees' procedural justice

  • Effects depend on practice complexity and are largest when algorithmic decisions are fully automated

  • For low complexity practices, including algorithms increases procedural justice

  • For high complexity practices, fully automating algorithmic decisions decreases procedural justice

Abstract

Algorithms are used in public management decisions, for instance, to allocate police staff to potential crime scenes. We study how the use of algorithms for managerial decisions affects procedural justice as reported by public employees. We argue that some public management practices may be more suitable for algorithmic decision-making than others. We hypothesize that employees' perceptions differ depending on the complexity of the practice at hand. We test this through two survey experiments on 109 Dutch public employees and 126 public employees from the UK. Our results show that, for practices that are low in complexity, procedural justice increases when a decision is made by an algorithm. Our results also show that, for practices that are high in complexity, decisions involving a public manager are perceived as higher in procedural justice compared to decisions that were made automatically by computers using algorithms. Nevertheless, adding an algorithm to a public manager's decision-making process can increase procedural justice for high complexity practices. We conclude that managers should explore automation opportunities for low complexity practices, but should be cautious when using algorithms to replace public managers' decisions for high complexity practices. In the latter case, transparency about algorithms and open dialogues on perceptions could be beneficial, but this should not be seen as a panacea.

Introduction

The idea that data can be used to improve decision-making processes in organizations has become more popular (Anastasopoulos & Whitford, 2019; Desouza & Jacob, 2017). At the same time, technological developments have allowed more, and novel, applications of algorithms in human decision-making processes (Veale & Brass, 2019; Burton, Stein, & Jensen, 2019). On top of that, algorithms have recently moved up the hierarchy and are becoming decision-making partners or substitutes at the level of leadership (Wesche & Sonderegger, 2019). In other words, algorithms are increasingly being used for managerial decision-making. For instance, some companies, such as Uber, are almost fully replacing managers with algorithms (Wesche & Sonderegger, 2019). Other examples include personalized, algorithm-based nudges implemented within organizations to change employees' behavior (The New York Times, 2018) and data mining used for the selection and evaluation of employees (Strohmeier & Piazza, 2013).

Novel applications of algorithms are also found in managerial decisions in the public sector. Examples include: calculating optimal routes for collecting municipal waste (Karadimas, Papatzelou, & Loumos, 2007); analyzing which buildings are most likely to catch fire in order to prioritize fire safety inspections (Engin & Treleaven, 2019); estimating where the chance of criminal behavior is highest and subsequently sending police staff to these so-called ‘hotspots’ (van Zoonen, 2016); evaluating teachers' performance (Diakopoulos, 2014; O'Neill, 2016); and guiding physicians' behavior through algorithm-based nudges in health care (Nagtegaal, Tummers, Noordegraaf, & Bekkers, 2019).

Algorithmic decision-making is, however, far from uncontested (Veale & Brass, 2019; Zarsky, 2016). The debate on the value of algorithms focuses on multiple aspects, including accuracy, power, and bias. People can, moreover, display algorithm aversion: a tendency to prefer human decision makers over algorithmic ones (Burton, Stein, & Jensen, 2019). In this paper, we focus on the effect of including algorithms in managerial decisions on procedural justice perceptions as reported by public employees. Procedural justice refers to the extent to which the process of decision-making is perceived as fair (Colquitt, 2001; Lind & Tyler, 1988). Procedural justice contributes to perceptions of legitimacy (Mazerolle et al., 2013). Algorithmic decision-making has been identified as a problem for the legitimacy of decision-making processes (Danaher, 2016), as algorithms are often opaque and may introduce bias (Janssen & Kuk, 2016).

We expect that perceptions of procedural justice differ depending on the involvement of the public manager and the algorithm, as well as on the complexity of the practice at hand. Building on the work of Zouridis et al. (2020), we distinguish three categories of algorithmic public management, in which managers have either full, partial, or no discretion. We test the effect of these different forms of algorithm-manager relationships on public employees' procedural justice. We hypothesize that perceptions of procedural justice differ according to the extent to which issues are complex (Busch, Henriksen, & Sæbø, 2018; Noordegraaf & Abma, 2003; Veale & Brass, 2019; Zarsky, 2016). We ask the following research question:

How does the inclusion of algorithms in managerial decision-making affect public employees' procedural justice perceptions of public management practices that differ in complexity?

The contribution of our work lies, first, in giving attention to the use of algorithms for managerial decisions in the public sector. Thus far, most attention has been directed at automating discretion at the frontline (Bovens & Zouridis, 2002; Reddick, 2005; Busch & Henriksen, 2018). Using algorithms for managerial decisions is an underexplored concept (Wesche & Sonderegger, 2019), even though key issues in the public sector, such as the tension between rule-following and discretion, are relevant at the managerial level as well (Maynard-Moody & Musheno, 2000). We also research the ‘middle ground’, where algorithms serve as decision-making partners rather than substitutes (Wesche & Sonderegger, 2019). This arrangement might be more realistic as, for instance, in Europe, Article 22 of the General Data Protection Regulation prohibits decision-making based solely on automated processing (Finck, 2019). Through the inclusion of hybrid forms of decision-making, we extend the research by Lee (2018) on the effects of solely automating decisions in general management.

Second, we connect algorithmic public management to procedural justice. Algorithms can only be used for managerial decision-making if they are perceived as legitimate (Wesche & Sonderegger, 2019). A lack of procedural justice can result in the rejection of using algorithms for certain management practices (Sunshine & Tyler, 2003). Thus, we believe that procedural justice has the potential to partly predict the direction in which algorithmic public management will develop. In addition, procedural justice affects organizational variables relating to public employees' performance and well-being, such as job satisfaction, performance, and organizational citizenship (Colquitt et al., 2001). As such, we explore the potential of including algorithms in managerial decisions to make a positive or negative contribution. This connects to the societal responsibility of science to explore potential problems and opportunities in novel technological applications (Ghislieri, Molino, & Cortese, 2018). More generally, our paper contributes to the literature on algorithm aversion and the antecedents of procedural justice within the public sector (Burton, Stein, & Jensen, 2019; Logg, Minson, & Moore, 2019). We moreover contribute to research on public values as an important determinant of technology adoption, rather than focusing only on the technical aspects of technology (Lupo, 2019; Twizeyimana & Andersson, 2019).

Third, we use an experimental approach. Experiments are especially valuable for detecting causal relationships (Gerber & Green, 2012; Margetts, 2011), because randomization accounts for unobserved confounders. Earlier research on perceptions within governmental organizations has used qualitative methods and identified complexity as an important factor in public employees' acceptance of discretion reduction (Busch et al., 2018). Our research contributes to testing this claim. We supplement our experimental results by qualitatively assessing which aspects of complexity are most salient for public employees when algorithms are used for managerial decisions.

The article will start by elaborating on algorithms, procedural justice, different types of algorithm-manager interactions and how perceptions are linked to management practices that differ in complexity. Then, we will present our hypotheses and explain our experimental method. Subsequently, we present our results and, finally, we end with a discussion and conclusion.

Section snippets

Algorithms

A technical definition of an algorithm is an ‘abstract mathematical structure that has been implemented into a system for analysis of tasks in a particular analytic domain’, according to Mittelstadt, Allo, Taddeo, Wachter, and Floridi's (2016) adaptation of Hill (2016, p. 47). This definition consists of two important elements. First, the algorithm refers to an abstract mathematical structure. Therefore, an algorithm does not necessarily imply the use of techniques such as machine learning.

Methods

We conducted two studies for this article. Our design builds on Lee's (2018) work, but extends it and adapts it to a public management context. Our groups represented three types of algorithm-manager interactions based on Zouridis et al.'s (2019) typology of system-, screen- and street-level bureaucracy. We based our public management scenarios on real-life algorithmic management. Study 1 was pre-registered at https://osf.io/xmzr8/. In this between-subjects study, we researched

Quantitative analyses

For study 1, the randomization check shows that our descriptive conditions are distributed equally among groups. All descriptives per group are shown in Appendix C (Table C.1). The manipulation check indicates that the manipulation was successful (χ2(4, N = 100) = 127.38, p = 0.00). Excluding those who failed the manipulation check leads to similar results. Our mixed ANOVA presents a significant interaction effect of complexity and the different types of decision-making (F(2, 106) = 44.09, p
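For readers who want to reproduce this style of analysis, the sketch below illustrates the two tests involved: a chi-square test for the manipulation check and a mixed ANOVA with practice complexity as a within-subjects factor and type of decision-making as a between-subjects factor. It is a minimal illustration only, using simulated data, hypothetical column names (id, condition, complexity, procedural_justice), and the scipy and pingouin libraries; it is not the study's own analysis script.

```python
# Minimal sketch of the analysis style: a chi-square manipulation check and a
# mixed ANOVA (within: complexity, between: decision-making condition).
# Simulated data and hypothetical column names; not the study's own script.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import pingouin as pg

rng = np.random.default_rng(42)
conditions = ["manager_only", "manager_plus_algorithm", "algorithm_only"]
n_per_group = 36  # hypothetical group size

# Long-format data: each respondent rates procedural justice (1-7 scale) for a
# low- and a high-complexity scenario, under one of three between-subjects conditions.
rows = []
for g, cond in enumerate(conditions):
    for i in range(n_per_group):
        for complexity in ["low", "high"]:
            rows.append({
                "id": g * n_per_group + i,
                "condition": cond,
                "complexity": complexity,
                "procedural_justice": rng.normal(4, 1),
            })
df = pd.DataFrame(rows)

# Manipulation check: cross-tabulate the assigned condition against the condition
# respondents report having seen (simulated here as correct about 90% of the time).
assigned = df.drop_duplicates("id").set_index("id")["condition"]
correct = rng.random(len(assigned)) > 0.1
perceived = assigned.where(correct, rng.choice(conditions, size=len(assigned)))
chi2, p, dof, _ = chi2_contingency(pd.crosstab(assigned, perceived))
print(f"Manipulation check: chi2({dof}, N = {len(assigned)}) = {chi2:.2f}, p = {p:.3f}")

# Mixed ANOVA: tests the main effects and the complexity x condition interaction.
aov = pg.mixed_anova(data=df, dv="procedural_justice",
                     within="complexity", subject="id", between="condition")
print(aov.round(3))
```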

Discussion

Algorithms are increasingly applied in public management decision-making. However, this could be problematic for the legitimacy and acceptance of decisions. This paper sought to answer the question: How does the inclusion of algorithms in managerial decision-making affect public employees' procedural justice perceptions of public management practices that differ in complexity?

Our results have two main implications. First, public employees' procedural justice changes most when algorithms are

Limitations

Our first limitations relate to the design. The scenarios varied on practice complexity. However, complexity consists of a technical and a normative dimension. We treated complexity as one construct in this article. As such, our experimental research did not allow us to assess the causal effect of the individual dimensions. Future research should study these dimensions separately. In addition, some respondents indicated that, for them, a manager's judgment implied that the manager did not

Author statement

This work is single-authored by Rosanna Nagtegaal.

Declarations of interest

None

Acknowledgements

I would like to thank my supervisors and colleagues at the Utrecht School of Governance, participants of NIG 2019 and EGPA 2019, as well as two anonymous reviewers for their insightful feedback.

Rosanna Nagtegaal is a PhD candidate at the Utrecht School of Governance (USG), Utrecht University, the Netherlands. In her research, she studies behavioral public administration (BPA) in order to analyze the (changing) behavior of public sector employees, especially in digitalizing environments. She most often uses experimental methodology.

References (77)

  • J.D. Twizeyimana et al. (2019). The public value of E-government – A literature review. Government Information Quarterly.
  • J.S. Wesche et al. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior.
  • L. van Zoonen (2016). Privacy concerns in smart cities. Government Information Quarterly.
  • M. Ananny et al. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society.
  • L.J. Anastasopoulos et al. (2019). Machine learning for public administration research, with application to organizational reputation. Journal of Public Administration Research and Theory.
  • D. Beer (2017). The social power of algorithms. Information, Communication & Society.
  • R. Binns et al. “It’s reducing a human being to a percentage”: Perceptions of justice in algorithmic decisions.
  • M. Bovens et al. (2002). From street-level to system-level bureaucracies: How information and communication technology is transforming administrative discretion and constitutional control. Public Administration Review.
  • J.W. Burton et al. (2019). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making.
  • P.A. Busch et al. (2018). Digital discretion: A systematic literature review of ICT and street-level discretion. Information Polity.
  • D.K. Citron et al. (2014). The scored society: Due process for automated predictions. Washington Law Review.
  • J.A. Colquitt (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology.
  • J.A. Colquitt et al. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology.
  • K. Crawford et al. (2014). Big data and due process: Toward a framework to redress predictive privacy harms. Boston College Law Review.
  • J. Danaher (2016). The threat of algocracy: Reality, resistance and accommodation. Philosophy and Technology.
  • K.C. Desouza et al. (2017). Big data in the public sector: Lessons for practitioners and scholars. Administration & Society.
  • N. Diakopoulos (2014). Algorithmic accountability reporting: On the investigation of black boxes.
  • Z. Engin et al. (2019). Algorithmic government: Automating public services and supporting civil servants in using data science technologies. The Computer Journal.
  • T. Evans et al. (2019). Discretion and the quest for controlled freedom.
  • M. Finck (2019). Smart contracts as a form of solely automated processing under the GDPR. International Data Privacy Law.
  • K.P. Fund (2020). About the local government pension scheme (LGPS). Kent County Council.
  • A.S. Gerber et al. (2012). Field experiments: Design, analysis, and interpretation.
  • C. Ghislieri et al. (2018). Work and organizational psychology looks at the fourth industrial revolution: How to support workers and organizations? Frontiers in Psychology.
  • D.L. Goodhue et al. (1995). Task-technology fit and individual performance. MIS Quarterly.
  • J. Greenberg et al. (2005). Handbook of organizational justice.
  • W.M. Grove et al. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical-statistical controversy. Psychology, Public Policy, and Law.
  • R.K. Hill (2016). What an algorithm is. Philosophy & Technology.
  • J.H. Holland (2014). Complexity: A very short introduction.
