We need to talk about deception in social robotics!
Ethics and Information Technology ( IF 3.4 ) Pub Date : 2020-11-11 , DOI: 10.1007/s10676-020-09573-9
Amanda Sharkey , Noel Sharkey

Although some authors claim that deception requires intention, we argue that there can be deception in social robotics, whether or not it is intended. By focusing on the deceived rather than the deceiver, we propose that false beliefs can be created in the absence of intention. Supporting evidence is found in both human and animal examples. Instead of assuming that deception is wrong only when carried out to benefit the deceiver, we propose that deception in social robotics is wrong when it leads to harmful impacts on individuals and society. The appearance and behaviour of a robot can lead to an overestimation of its functionality or to an illusion of sentience or cognition that can promote misplaced trust and inappropriate uses such as care and companionship of the vulnerable. We consider the allocation of responsibility for harmful deception. Finally, we make the suggestion that harmful impacts could be prevented by legislation, and by the development of an assessment framework for sensitive robot applications.




Updated: 2021-01-13