Do others mind? Moral agents without mental states
South African Journal of Philosophy, Pub Date: 2021-06-29, DOI: 10.1080/02580136.2021.1925841
Fabio Tollon

As technology advances and artificial agents (AAs) become increasingly autonomous, beginning to embody morally relevant values and to act on those values, the question arises of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMAs: by appeal to intentional criteria or to functional criteria. In this article, I provide an exposition and critique of "intentional" accounts of AMAs. These accounts claim that moral agency should be accorded only to entities that have internal mental states. Against this thesis I argue that the requirement of internal states is philosophically unsound, as it runs up against the problem of other minds. In place of intentional accounts, I offer a functionalist alternative, which makes conceptual room for the existence of AMAs. The implication of this thesis is that at some point in the future we may be faced with moral situations in which no human being is responsible, but a machine may be. Moreover, I claim that this responsibility holds independently of whether the agent in question is "punishable" or not.




Updated: 2021-06-29