How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners
Ethics and Information Technology (IF 3.633) | Pub Date: 2021-05-27 | DOI: 10.1007/s10676-021-09598-8
Eva Weber-Guskar

Interactions between humans and machines that include artificial intelligence are increasingly common in nearly all areas of life. Meanwhile, AI products are increasingly endowed with emotional characteristics: they are designed and trained to elicit emotions in humans, to recognize human emotions and, sometimes, to simulate emotions — emotionalized AI (EAI). The introduction of such systems into our lives is met with some criticism. There is a rather strong intuition that there is something wrong about getting attached to a machine, about having certain emotions towards it, and about getting involved in a kind of affective relationship with it. In this paper, I want to tackle these worries by focusing on the last aspect: in what sense could it be problematic, or even wrong, to establish an emotional relationship with EAI systems? I want to show that the justifications for the widespread intuition concerning these problems are not as strong as they seem at first sight. To do so, I discuss three arguments: the argument from self-deception, the argument from lack of mutuality, and the argument from moral negligence.


