Good Robots, Bad Robots: Morally Valenced Behavior Effects on Perceived Mind, Morality, and Trust
International Journal of Social Robotics (IF 3.8), Pub Date: 2020-09-10, DOI: 10.1007/s12369-020-00692-3
Jaime Banks

Both robots and humans can behave in ways that engender positive and negative evaluations of their behaviors and associated responsibility. However, extant scholarship on the link between agent evaluations and valenced behavior has generally treated moral behavior as a monolithic phenomenon and largely focused on moral deviations. In contrast, contemporary moral psychology increasingly considers moral judgments to unfold in relation to a number of moral foundations (care, fairness, authority, loyalty, purity, liberty) subject to both upholding and deviation. The present investigation seeks to discover whether social judgments of humans and robots emerge differently as a function of moral foundation-specific behaviors. This work is conducted in two studies: (1) an online survey in which agents deliver observed/mediated responses to moral dilemmas and (2) a smaller laboratory-based replication with agents delivering interactive/live responses. In each study, participants evaluate the goodness of and blame for six foundation-specific behaviors, and evaluate the agent for perceived mind, morality, and trust. Across these studies, results suggest that (a) moral judgments of behavior may be agent-agnostic, (b) all moral foundations may contribute to social evaluations of agents, and (c) physical presence and agent class contribute to the assignment of responsibility for behaviors. Findings are interpreted to suggest that bad behaviors denote bad actors, broadly, but machines bear a greater burden to behave morally, regardless of their credit- or blame-worthiness in a situation.



Updated: 2020-09-11