Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use
Science and Engineering Ethics ( IF 3.7 ) Pub Date : 2021-01-26 , DOI: 10.1007/s11948-021-00283-z
Christian Herzog

In the present article, I will advocate caution against developing artificial moral agents (AMAs), based on the notion that utilizing preliminary forms of AMAs may feed back negatively on the human social system and on human moral thought itself and its value, for example, by reinforcing social inequalities and diminishing both the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economic use. I will base my arguments on two thought experiments. The first thought experiment deals with the potential to generate a replica of an individual's moral stances with the purpose of increasing what I term 'moral efficiency'. Hence, as a first risk, the unregulated utilization of premature AMAs in a neoliberal capitalist system is likely to disadvantage those who cannot afford 'moral replicas' and to further reinforce social inequalities. The second thought experiment deals with the idea of a 'moral calculator'. As a second risk, I will argue that, even as devices equally accessible to all and aimed at augmenting human moral deliberation, 'moral calculators', as preliminary forms of AMAs, are likely to diminish the breadth and depth of the concepts employed in moral arguments. Again, I base this claim on the observation that the currently dominant economic system rewards increases in productivity. However, such increases in efficiency will mostly stem from relying on the outputs of 'moral calculators' without further scrutiny. Premature AMAs will cover only a limited scope of moral argumentation, and over-reliance on them will therefore narrow human moral thought. As a third risk, I will argue that an increased disregard for the interior of the moral agent may ensue, a trend that can already be observed in the literature.




Updated: 2021-01-28