Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?
Science and Engineering Ethics (IF 2.7), Pub Date: 2021-06-29, DOI: 10.1007/s11948-021-00318-5
Francisco Lara

Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing individuals' capacity to decide reflectively for themselves, rather than at directly influencing their behaviour. In support of this, the article shows how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, the article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, provided certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.



