Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News
arXiv - CS - Artificial Intelligence Pub Date : 2020-09-16 , DOI: arxiv-2009.07698
Reuben Tan, Bryan A. Plummer, Kate Saenko

Large-scale dissemination of online disinformation intended to mislead or deceive the general population is a major societal problem. Rapid progress in image, video, and natural language generative models has only exacerbated this situation and intensified the need for effective defense mechanisms. While approaches have been proposed to defend against neural fake news, they are generally constrained to the very limited setting where articles contain only text and metadata such as the title and authors. In this paper, we introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions. To identify the weaknesses that adversaries can exploit, we create NeuralNews, a dataset composed of four types of generated articles, and conduct a series of human user studies based on it. In addition to the valuable insights gleaned from these studies, we present a relatively effective approach based on detecting visual-semantic inconsistencies, which can serve as a first line of defense and a useful reference for future work on defending against machine-generated disinformation.
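To make the core idea concrete, here is a minimal sketch (not the paper's actual model) of how a visual-semantic inconsistency check might work: embed an article's image and its caption in a shared space and flag the pair when their cosine similarity falls below a threshold. The embeddings and threshold below are hypothetical placeholders; in practice they would come from a pretrained vision-language encoder.

```python
# Illustrative sketch of visual-semantic inconsistency detection.
# NOTE: the vectors and threshold are hypothetical stand-ins; a real
# system would use embeddings from a pretrained vision-language model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_inconsistent(img_emb: np.ndarray, cap_emb: np.ndarray,
                    threshold: float = 0.5) -> bool:
    """Flag a (image, caption) pair whose similarity is below threshold,
    suggesting the caption does not actually describe the image."""
    return cosine_similarity(img_emb, cap_emb) < threshold

# Toy vectors standing in for real image/caption embeddings.
image_emb = np.array([1.0, 0.0, 0.2])
matching_caption = np.array([0.9, 0.1, 0.3])   # semantically close
mismatched_caption = np.array([-0.8, 0.9, 0.0])  # semantically far

print(is_inconsistent(image_emb, matching_caption))    # False
print(is_inconsistent(image_emb, mismatched_caption))  # True
```

A score-based variant of this check (aggregating similarities over all image-caption pairs in an article) is one plausible way such a detector could serve as a first-pass filter before heavier analysis.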

Updated: 2020-10-22