Deepfake false memories
Memory (IF 2.519) Pub Date: 2021-04-28, DOI: 10.1080/09658211.2021.1919715
Gillian Murphy, Emma Flynn

ABSTRACT

Machine learning has enabled the creation of “deepfake videos”: highly realistic footage that shows a person saying or doing something they never did. In recent years, this technology has become more widespread, and various apps now allow an average social-media user to create a deepfake video that can be shared online. There are concerns about how this may distort memory for public events, but to date there is no evidence to support this. Across two experiments, we presented participants (N = 682) with fake news stories in the format of text, text with a photograph, or text with a deepfake video. Though participants rated the deepfake videos as convincing, dangerous, and unethical, and some participants did report false memories after viewing deepfakes, the deepfake video format did not consistently increase false memory rates relative to the text-only or text-with-photograph conditions. Further research is needed, but the current findings suggest that while deepfake videos can distort memory for public events, they may not always be more effective than simple misleading text.




Updated: 2021-04-28