Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making

https://doi.org/10.1016/j.clsr.2020.105456

Abstract

The ongoing substitution of human decision-makers by automated decision-making (ADM) systems in a whole range of areas raises the question of whether and, if so, under which conditions, ADM is acceptable and fair. So far, this debate has been led primarily by academics, civil society, technology developers and members of the expert groups tasked with developing ethical guidelines for ADM. Ultimately, however, ADM affects citizens, who will have to live with, act upon and accept the authority of ADM systems.

The paper aims to contribute to this larger debate by providing deeper insights into the question of whether, and if so, why and under which conditions, citizens are inclined to accept ADM as fair. The results of a survey (N = 958) with a representative sample of the Dutch adult population show that most respondents assume that AI-driven ADM systems are fairer than human decision-makers.

A more nuanced view emerges from an analysis of the responses: emotions, expectations about AI being data- and calculation-driven, and the role of the programmer, among other dimensions, were cited as reasons for (un)fairness by AI or humans. Individual characteristics such as age and education level influenced not only perceptions of AI fairness, but also the reasons provided for such perceptions. The paper concludes with a normative assessment of the findings and suggestions for future debate and research.

Introduction

AI is moving into the very essence of what constitutes us as a democratic society: the way we make decisions. Automated decision-making (ADM) systems are replacing human decision-makers in a whole range of areas, from governments and courtrooms to HR departments, financial institutions, the media and politics. The ongoing integration of ADM into the bodies and institutions that take decisions in our society has triggered an intense debate among academics, policymakers and civil society about the conditions under which ADM is acceptable or unacceptable, the opportunities and, perhaps even more so, the risks that ADM poses, and how we can ensure that ADM systems respect the fundamental values that characterize our society.1

One such value is fairness. Fairness is a notoriously difficult concept to define, and automation and the algorithmic turn force us to revisit its meaning in the context of ADM. Different disciplines, from computer science to political philosophy and law, have begun to reconceptualize fairness to determine the potential but also the limits of integrating ADM into society. The importance of algorithmic fairness has been reemphasized in a range of high-level policy documents and ethical guidelines,2 and the 'F' for fairness is a constitutive part of the FACT and FAIR acronyms in the realm of Responsible Data Science and, more recently, the Responsible AI movement. Ultimately, the ongoing discussion about algorithmic fairness is a deep debate about whether we, as a society, are willing to accept ADM as a legitimate form of decision-making and, if so, under which conditions.

Thus far, this societal debate has been primarily led and informed by academics, civil society, technology developers and members of the many expert groups tasked to develop ethical guidelines for ADM, which is not unreasonable given the immense complexity of the issue. Ultimately, however, ADM affects citizens. They will have to live with, act on and accept the authority of ADM systems. Any claim to the legitimacy of automated decisions will have to be recognized by them, and algorithmic fairness accepted as justice. Citizens' perceptions and assumptions regarding fairness in ADM in contemporary societies in general, and their expectations regarding ADM in particular, remain critically understudied. The goal of this paper is therefore threefold: (1) to gain an initial understanding of citizens' intuitive perceptions of ADM fairness, and of why and under which conditions people are inclined to perceive and accept ADM as fair or unfair in comparison to human decision-makers; (2) to ascertain to what extent the principles and concerns that citizens consider decisive in judging ADM as fair or unfair correspond with conceptions of fairness currently found in the academic literature, or whether certain critical aspects may have been overlooked; and (3) to explore the extent to which individual characteristics such as age, gender and education level influence such perceptions. In summary, we propose the following research question: How do citizens perceive the potential fairness of ADM systems in comparison to human decision-makers; what principles lie behind their evaluations of fairness; and to what extent do individual characteristics influence such evaluations?

In addressing this question, this article contributes to the current literature on fairness in ADM by adding empirically grounded insights into the perceptions of citizens as potential subjects of ADM, including, in particular, their articulations of aspects related to procedural fairness. Much of the current literature still focuses on aspects of substantive fairness (such as the absence of bias or respect for individual rights such as privacy or non-discrimination) and on the process of ADM itself. Because of the focus on people's perceptions, we are able to provide a more nuanced understanding of fairness, showing that people appreciate different aspects of fairness in ADM and value both modes of decision-making for different reasons, or in combination. Finally, the article seeks to broaden the debate around fairness in ADM by demonstrating that fairness does not automatically translate into justice: when implementing ADM in professional decision-making processes, care must be taken not only with the fairness of the decisions themselves, but also with rendering automated judgements in a way that respects human dignity and the potential need for human interaction. In other words, we argue that there is also an emotive or relational dimension of fairness that needs to be considered when implementing ADM in any decision-making process. For the purpose of this research, we approached ADM broadly, in the sense of decision-making in a professional capacity, without referring to a particular sector. However, when elaborating on the qualities that could be expected of a (human) professional decision-maker, we borrowed from the literature on decision-making in the judicial sector, as this is one of the prototypes of professional decision-making and an area where the question of what characterizes fairness in decision-makers and decision-making has been researched extensively, precisely because of the linkage between justice and fairness.

In the following, we will briefly outline the current discussion in the academic literature, before describing the methods and empirical findings of our research. We will conclude with a discussion of our findings and reflections for further research.

Section snippets

Decision-making, justice and society

Disagreements are a necessary characteristic of pluralist democratic societies, as is the existence of institutions and procedures to resolve them. This is done on the basis of rules and legal standards that reflect a society's central values and perceptions of justice and fairness.3 According to Bellamy, 'democracy embodies the "right to have rights" of citizens—it offers the mechanism through

Sample

To investigate public perceptions and assumptions regarding ADM fairness, we conducted a survey of a nationally representative sample of the Dutch adult population (18 years or older). Participants were recruited from a public opinion research company's database, which has over 115,000 registered respondents. The survey is part of a larger project investigating ADM by AI. We began by inviting a random sample, reflective of the national population for age, gender, region and educational level,

Who is the fairest of them all?

In response to RQ1, the largest share of respondents indicated that AI would be fairer than a human in making decisions (see Table 1). Some answers, however, were more nuanced, suggesting that fairness would depend on the circumstances, that both AI and humans are equally fair (or unfair), or that the two should work together.

Conditions for fairness

In addition to indicating which kind of decision-maker they considered to be fairer, respondents also elaborated on the conditions under which, or reasons why, humans or AI

Discussion

We now turn to a reflection on some of the key findings of this research. This is not to say that we believe that citizens should or could be the ultimate arbiters in deciding whether and, if so, how the integration of ADM into decision-making procedures can incorporate ethical, moral or legal conceptions of fairness. Indeed, as the results of our empirical inquiry illustrate, people's judgements are often clouded by a range of important misconceptions about technology. Nevertheless, as we will

Declaration of Competing Interest

There are no competing interests to declare.

Acknowledgment

This research was supported by the Research Priority Areas Information & Communication in the Data Society (https://www.uva-icds.net/) and Communication, and its Digital Communication Methods Lab (digicomlab.eu), at the University of Amsterdam. The funders had no influence on the research design or execution.

References (57)

  • M. Butterworth, The ICO and artificial intelligence: the role of fairness in the GDPR framework, Comput. Law Secur. Rev. (2018)
  • C. Prins, Digital justice, Comput. Law Secur. Rev. (2018)
  • B. Alarie et al., How artificial intelligence will affect the practice of law, Univ. Toronto Law J. (2018)
  • N. Aletras et al., Predicting judicial decisions of the European Court of Human Rights: a natural language processing perspective, PeerJ Comput. Sci. (2016)
  • AlgorithmWatch, Automating society: taking stock of automated decision-making in the EU (2019)
  • Ch. Angelopoulos, MTE v Hungary: a new ECtHR judgement on intermediary liability and freedom of expression, J. Intell. Prop. Law Pract. (2016)
  • J. Baleis et al., Cognitive and emotional response to fairness in AI – a systematic...
  • L. Barna et al., What makes a good judge, Budapest: European Judicial Training Network Themis...
  • S. Barocas et al., Big data's disparate impact, Calif. L. Rev. (2016)
  • R. Bellamy, The democratic qualities of courts: a critical analysis of three arguments, Representation (2013)
  • R. Binns, Data protection impact assessments: a meta-regulatory approach, Int. Data Priv. Law (2017)
  • R. Binns et al., 'It's reducing a human being to a percentage': perceptions of justice in algorithmic decisions
  • L. Blum, Moral perception and particularity, Ethics (1991)
  • D.K. Citron, Technological due process, Washington Univ. Law Rev. (2008)
  • D. Clifford et al., Data protection and the role of fairness, Yearb. Eur. Law (2018)
  • R. Cranston, What do courts do?, Civ. Just. Q. (1986)
  • R.M. Dawes et al., Clinical versus actuarial judgment, Science (1989)
  • B.J. Dietvorst et al., Algorithm aversion: people erroneously avoid algorithms after seeing them err, J. Exp. Psychol.: Gen. (2015)
  • B.J. Dietvorst et al., Overcoming algorithm aversion: people will use imperfect algorithms if they can (even slightly) modify them, Manage. Sci. (2018)
  • J.J. Dijkstra et al., Persuasiveness of expert systems, Behav. Inf. Technol. (1998)
  • I.V. Domselaar, Moral quality in adjudication: on judicial virtues and civic friendship, Netherlands J. Legal Philos. (2015)
  • C. Dwork et al., Fairness through awareness
  • A. Feldman et al., What motivates the justices: utilizing automated text classification to determine Supreme Court justices' preferences
  • N.K. Gale et al., Using the framework method for the analysis of qualitative data in multi-disciplinary health research, BMC Med. Res. Methodol. (2013)
  • B. Goodman et al., European Union regulations on algorithmic decision-making and a 'right to explanation', AI Mag. (2017)
  • H. Heidari et al., A moral framework for understanding fair ML through economic models of equality of opportunity
  • K.A. Hoff et al., Trust in automation: integrating empirical evidence on factors that influence trust, Hum. Factors: J. Hum. Factors Ergon. Soc. (2015)
  • F. Kamiran et al., Techniques for discrimination-free predictive models, in: Discrimination and Privacy in the Information Society (2013)