Exposed by AIs! People Personally Witness Artificial Intelligence Exposing Personal Information and Exposing People to Undesirable Content
International Journal of Human-Computer Interaction (IF 4.7) Pub Date: 2020-05-25, DOI: 10.1080/10447318.2020.1768674
Daniel B. Shank, Alexander Gott

Do people personally witness artificial intelligence (AI) committing moral wrongs? If so, what kinds of moral wrongs, and what situations produce them? To address these questions, respondents selected one of six prompt questions, each based on a moral foundation violation, asking about a personally witnessed interaction with an AI that either resulted in a moral victim (victim prompts) or in which the AI seemed to engage in immoral actions (action prompts). Respondents then answered their selected question in an open-ended response. In conjunction with the liberty/privacy and purity moral foundations, and across both victim and action prompts, respondents most frequently reported moral violations as two types of exposure by AIs: their personal information being exposed (31%) and people’s exposure to undesirable content (20%). AIs expose people’s personal information to their colleagues, close relations, and online due to information sharing across devices, people in proximity to audio devices, and simple accidents. AIs expose people, often children, to undesirable content such as nudity, pornography, violence, and profanity due to their proximity to audio devices and to seemingly purposeful action. We argue that the prominence of these types of exposure in reports may be due to their frequent occurrence on personal and home devices. This suggests that research on AI ethics should focus not only on prototypically harmful moral dilemmas (e.g., an autonomous vehicle deciding whom to sacrifice) but also on everyday interactions with personal technology.




Updated: 2020-05-25