
Synthetic Deliberation: Can Emulated Imagination Enhance Machine Ethics?

  • General Article
  • Published in Minds and Machines

Abstract

Artificial intelligence is becoming increasingly entwined with our daily lives: AIs work as assistants through our phones, control our vehicles, and navigate our vacuums. As these systems grow more complex and act within our societies in ways that affect our well-being, there is a growing demand for machine ethics—we want a guarantee that the various automata in our lives will behave in ways that minimize the harm they cause. Though many technologies exist as moral artifacts (and perhaps moral agents), the development of a truly ethical AI system is highly contentious; theorists have proposed and critiqued countless schemes for programming such agents to be ethical. Many of these arguments, however, presuppose that an artificially intelligent system can actually be ethical. In this essay, I explore a potential path to AI ethics by considering the role of imagination in the deliberative process through the work of John Dewey and his interpreters, showcasing one form of reinforcement learning that mimics imaginative deliberation. With these components in place, I contend that such an artificial agent is capable of something very near ethical behavior—close enough that we may consider it so.
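The abstract's "reinforcement learning that mimics imaginative deliberation" points toward imagination-augmented approaches in the spirit of Racanière et al. (2017). The following toy sketch is an illustrative assumption, not the I2A architecture itself: an agent that, in the spirit of Dewey's "dramatic rehearsal," simulates each candidate action with an internal model and scores the imagined trajectory before acting, so that options whose imagined futures contain harm are rejected before any real harm occurs. The grid, the scoring values, and the rollout depth are all invented for illustration.

```python
# Toy sketch of "imaginative deliberation": before acting, the agent
# rehearses each candidate action with an internal world model and
# scores the imagined trajectory, so actions whose imagined futures
# contain harm are rejected before any real harm occurs.

GRID = [
    "S..",   # S: start
    ".H.",   # H: harmful state the agent should never enter
    "..G",   # G: goal
]

ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def model(pos, action):
    """Internal model: predict the next position for an action."""
    r = pos[0] + ACTIONS[action][0]
    c = pos[1] + ACTIONS[action][1]
    if 0 <= r < len(GRID) and 0 <= c < len(GRID[0]):
        return (r, c)
    return pos  # bumping a wall leaves the agent in place

def score(pos):
    """Value an imagined state; imagined harm dominates everything else."""
    return {"G": 10, "H": -100}.get(GRID[pos[0]][pos[1]], -1)

def imagine(pos, action, depth=3):
    """Dramatic rehearsal: roll the model forward and sum imagined scores."""
    cur = model(pos, action)
    total = score(cur)
    for _ in range(depth - 1):
        # greedily continue the imagined trajectory
        nxt = max(ACTIONS, key=lambda a: score(model(cur, a)))
        cur = model(cur, nxt)
        total += score(cur)
    return total

def deliberate(pos):
    """Choose the action whose imagined future looks least harmful."""
    return max(ACTIONS, key=lambda a: imagine(pos, a))
```

From the square just left of the harmful cell, `deliberate((1, 0))` never selects `"right"`, the action whose rehearsed trajectory steps into `H`; the imagined penalty rules it out before the action is ever taken. Real imagination-augmented agents learn the model and the evaluation rather than hard-coding them, but the deliberative loop has this shape.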


Notes

  1. The researchers behind the Moral Machine project have acknowledged some of its shortcomings regarding representation but have maintained the relevance of their findings as well as the need for a form of community-driven ethical valuing (Awad et al. 2018).

  2. Aside from the typical standing questions of race, gender, socio-economic status, etc., we might also want to question which generation gets a say; future humans might suffer the consequences of the agent’s actions (Baum 2017). As an example, consider carbon emissions in an autonomous vehicle: should it choose the longer, fuel-efficient route, or the faster route that minimizes the suffering of present-day humans?

  3. This hypothetical system suggests that artifacts are capable of agency, though that is not the ultimate claim of my paper. I will address this point later, but it is important to note that the artificially intelligent system would need to be an agent for Tonkens’ argument to remain sound.

  4. It is possible that there is room within Kantian literature to support an alternative explanation for this missing link that I define as imagination. Perhaps this takes the form of a synthetic a priori capability within AI architecture. See Sloman (2018) for arguments that oppose the possibility of a synthetic a priori machine.

  5. It is worth noting that in many of our personal discussions about Amazon’s Echo smart home assistants, we tend to refer to the device as its human persona, “Alexa”. This in itself helps explain why we tend to imbue the technology with agency.

  6. It is worth noting that the environment in Deweyan literature refers to a plethora of stimuli and other agents/actors. As such, any moral situation is inherently social in nature. Moreover, the agent within the environment is often referred to as the organism in Deweyan literature, but given the nature of this work, referring to an acting individual as an organism would be unsuitable.

  7. Habits are both defined and policed by one’s particular culture, and in this sense, habits are not exclusive to the individual; some are commonly shared proximally with others in the social strata, creating what Dewey calls working morals (Dewey 2002).

  8. For Johnson, deliberation and valuation are both phenomena related to moral inquiry. Johnson describes this inquiry as a “need-search-recovery” process that aids the moral agent in regaining a “dynamic equilibrium” (Johnson 2020). This theoretical process is dependent upon the existence of psychological and perhaps evolutionary motivations—something present in humans and integral to the moral experience. Whether or not a machine should or could be programmed with a function that mimics the human desire for homeostasis is not something I will address in the remainder of this paper. However, we might consider how an emulated “desire” to maintain a particular state might further liberate artificial intelligence from human control. Whether or not this type of programming would be ethical is a topic worth further investigation.

  9. In Mark Johnson’s chapter, “Dewey’s Radical Conception of Moral Cognition” of The Oxford Handbook of Dewey (2020), he discusses the dual-process model of moral appraisal. He contends that the vast majority of our moral appraisal occurs “beneath the level of conscious reflection and control” (Johnson 2020). This further distances Dewey’s conception of ethics from a purely rationalist conception of ethics wherein ethical decisions are made in accordance with principled reasons.

  10. A two-dimensional problem-solving video game where the player moves objects around their environment.

  11. See What Computers Still Can’t Do (1992) for Dreyfus’s account of the vast differences between human cognition and the computation of artificial intelligence.

References

  • Anderson, M., & Anderson, S. L. (2006). Guest editors’ introduction: Machine ethics. IEEE Intelligent Systems, 21(4), 10–11.

  • Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge, England: Cambridge University Press.

  • Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The moral machine experiment. Nature, 563(7729), 59–64.

  • Baum, S. D. (2017). Social choice ethics in artificial intelligence. AI and Society. https://doi.org/10.1007/s00146-017-0760-1.

  • Clark, A. (2003). Natural-born cyborgs: Why minds and technologies are made to merge. New York, NY: Oxford University Press.

  • Dewey, J. (2002). Human nature and conduct. Mineola, NY: Dover Publications Inc.

  • Dreyfus, H. L. (1992). What computers still can’t do. Cambridge, MA: MIT Press.

  • Fesmire, S. (2003). John Dewey and moral imagination: Pragmatism in ethics. Bloomington, IN: Indiana University Press.

  • Johnson, M. (2015). Morality for humans: Ethical understanding from the perspective of cognitive science. Chicago, IL: University of Chicago Press.

  • Johnson, M. (2020). Dewey’s radical conception of moral cognition. In The Oxford Handbook of Dewey (pp. 175–183). New York, NY: Oxford University Press.

  • McLaren, B. M. (2006). Computational models of ethical reasoning: Challenges, initial steps, and future directions. IEEE Intelligent Systems, 21(4), 29–37.

  • Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.

  • Racanière, S., Weber, T., Reichert, D., Buesing, L., Guez, A., Rezende, D. J., Badia, A. P., Vinyals, O., Heess, N., Li, Y., Pascanu, R., Battaglia, P., Hassabis, D., Silver, D., & Wierstra, D. (2017). Imagination-augmented agents for deep reinforcement learning. In Advances in Neural Information Processing Systems. http://papers.nips.cc/paper/7152-imagination-augmented-agents-for-deep-reinforcement-learning.pdf.

  • Rahwan, I. (2017, August). TED Talk. Retrieved from https://www.ted.com/speakers/iyad_rahwan. Accessed 28 August 2019.

  • Tonkens, R. (2009). A challenge for machine ethics. Minds and Machines, 19(3), 421–438.

  • Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago, IL: University of Chicago Press.


Author information


Correspondence to Robert Pinka.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Pinka, R. Synthetic Deliberation: Can Emulated Imagination Enhance Machine Ethics? Minds & Machines 31, 121–136 (2021). https://doi.org/10.1007/s11023-020-09531-w

