Computationally rational agents can be moral agents

  • Original Paper
  • Ethics and Information Technology

Abstract

In this article, a concise argument for computational rationality as a basis for artificial moral agency is advanced. Some ethicists have long argued that rational agents can become artificial moral agents. However, most of their views have come from purely philosophical perspectives, making it difficult to transfer their arguments to a scientific and analytical frame of reference. The result has been a disintegrated approach to the conceptualisation and design of artificial moral agents. In this article, I make the argument for computational rationality as an integrative element that effectively combines the philosophical and computational aspects of artificial moral agency. This leads to a philosophically coherent and scientifically consistent model for building artificial moral agents. Besides providing a possible answer to the question of how to build artificial moral agents, this model also invites sound debate from multiple disciplines, which should help to advance the field of machine ethics.
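
As a point of reference for the argument above: in AI, computational rationality is usually operationalised as resource-bounded expected-utility maximisation, in the decision-theoretic sense introduced in note 3 below. The sketch that follows is a generic illustration under that reading, not the model proposed in this article; the action names, the outcome model and the utility values are hypothetical.

```python
import random

def expected_utility(action, outcome_model, utility, n_samples=50):
    """Monte Carlo estimate of an action's expected utility.

    The fixed sample budget stands in for the bounded computational
    resources that computational rationality takes as given.
    """
    outcomes = [outcome_model(action) for _ in range(n_samples)]
    return sum(utility(o) for o in outcomes) / len(outcomes)

def choose_action(actions, outcome_model, utility, n_samples=50):
    """Select the action with the highest estimated expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_model, utility, n_samples))

# Toy usage: a stochastic outcome model and a utility function that
# encodes a simple moral preference (avoiding harm outweighs delay).
def outcome_model(action):
    if action == "swerve":
        return {"harm": 0, "delay": 1}
    return {"harm": 1 if random.random() < 0.3 else 0, "delay": 0}

def utility(outcome):
    return -10 * outcome["harm"] - 1 * outcome["delay"]

print(choose_action(["swerve", "stay"], outcome_model, utility))  # typically "swerve"
```

The sample budget is the only nod to bounded computation here; a fuller treatment would also meter the cost of deliberation itself.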


Notes

  1. Sometimes referred to as machine morality, computational morality or artificial morality.

  2. See, for example, the works of Wu and Lin (2018), Arnold et al. (2017), Conitzer et al. (2017) and Yu et al. (2018). These works offer tremendous technical value for building AMAs, but very little by way of conceptualisation and formulation of an AMA.

  3. Chapter 13 of Russell and Norvig (2009) gives a great introduction to decision theory in computer science.

  4. This can be found in Book VI of the Nicomachean Ethics.

  5. “A moral agent is an agent whom one appropriately holds responsible for its actions and their consequences, and moral agency is the distinct type of agency that agent possesses”.

  6. According to Moor (2006), an explicit ethical agent is one that can hold an explicit ethical representation of a given situation and use it to respond in a manner that is ethical.

  7. The use of the terms top-down and bottom-up in both the cited philosophical and scientific disciplines is conceptually the same. In both cases, top-down means starting from a pre-defined ethical framework or computational model, and bottom-up means learning an ethical representation or computational model from the available data; the short sketch after these notes illustrates the contrast.
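
To make the distinction in note 7 concrete, the sketch below contrasts a top-down rule check with a bottom-up judgement learned from labelled examples. It is a toy illustration only: the rule set, the action names and the vote-count "learner" are placeholders, not the method argued for in the article.

```python
from collections import Counter

# Top-down: a pre-defined ethical framework encoded as explicit rules.
FORBIDDEN_ACTIONS = {"deceive", "harm"}  # hypothetical rule set

def top_down_permissible(action: str) -> bool:
    """Judge an action against hand-written rules."""
    return action not in FORBIDDEN_ACTIONS

# Bottom-up: an ethical representation learned from labelled examples.
# A simple vote count stands in for any real learning algorithm.
def train_bottom_up(examples):
    """examples: iterable of (action, was_judged_permissible) pairs."""
    votes = Counter()
    for action, permissible in examples:
        votes[action] += 1 if permissible else -1
    return lambda action: votes[action] > 0  # unseen actions default to impermissible

judge = train_bottom_up([("assist", True), ("deceive", False), ("assist", True)])
print(top_down_permissible("deceive"), judge("deceive"))  # False False
```

Any real learner, for instance a classifier trained on human moral judgements, could replace the vote count without changing the top-down versus bottom-up contrast.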


Author information

Corresponding author

Correspondence to Bongani Andy Mabaso.


About this article


Cite this article

Mabaso, B.A. Computationally rational agents can be moral agents. Ethics Inf Technol 23, 137–145 (2021). https://doi.org/10.1007/s10676-020-09527-1

