MONITOR: A Multimodal Fusion Framework to Assess Message Veracity in Social Networks
arXiv - CS - Databases. Pub Date: 2021-09-06. DOI: arxiv-2109.02271. Authors: Abderrazek Azri (ERIC), Cécile Favre (ERIC), Nouria Harbi (ERIC), Jérôme Darmont (ERIC), Camille Noûs
Users of social networks tend to post and share content with little
restraint. Hence, rumors and fake news can quickly spread on a huge scale. This
may pose a threat to the credibility of social media and can cause serious
consequences in real life. Therefore, the task of rumor detection and
verification has become extremely important. Assessing the veracity of a social
media message (e.g., by fact checkers) involves analyzing the text of the
message, its context, and any multimedia attachments. This is a very
time-consuming task that machine learning can greatly assist. In the
literature, most message veracity verification methods only exploit textual
contents and metadata. Very few take both textual and visual contents, and more
particularly images, into account. In this paper, we support the hypothesis that
exploiting all of the components of a social media post enhances the accuracy
of veracity detection. To advance the state of the art, we first propose using
a set of advanced image features inspired by the field of image
quality assessment, which effectively contributes to rumor detection. These
metrics are good indicators for the detection of fake images, even for those
generated by advanced techniques like generative adversarial networks (GANs).
Then, we introduce the Multimodal fusiON framework to assess message veracIty
in social neTwORks (MONITOR), which exploits all message features (i.e., text,
social context, and image features) through supervised machine learning. Such
algorithms provide interpretability and explainability for the decisions they
make, which we believe is particularly important in the context of rumor
verification. Experimental results show that MONITOR can detect rumors with an
accuracy of 96% and 89% on the MediaEval benchmark and the FakeNewsNet dataset,
respectively. These results are significantly better than those of
state-of-the-art machine learning baselines.
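The abstract does not include code, but the fusion idea it describes can be sketched as follows. This is an illustrative example only, not the authors' implementation: it uses Laplacian-variance sharpness as a simple stand-in for the paper's image-quality-assessment features, concatenates it with placeholder text and social-context features (early fusion), and trains a random forest, an interpretable supervised learner of the kind the abstract favors. All feature choices and dimensions here are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def blur_score(img):
    """Variance of a 3x3 Laplacian response: a simple no-reference
    sharpness statistic (a stand-in for the paper's IQA features)."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out.var()

def fuse_features(text_feats, social_feats, img):
    # Early fusion: concatenate per-modality features into one vector.
    return np.concatenate([text_feats, social_feats, [blur_score(img)]])

# Synthetic stand-in data: 5 text features, 3 social features, one 32x32 image.
rng = np.random.default_rng(0)
X = np.stack([fuse_features(rng.random(5), rng.random(3),
                            rng.random((32, 32))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # toy labels: 1 = rumor, 0 = non-rumor

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# Feature importances give the kind of interpretability the abstract mentions:
# one score per fused feature, showing which modality drives each decision.
print(clf.feature_importances_)
```

The real system would replace the random vectors with actual text embeddings, social-context metadata, and the full set of IQA metrics, but the fusion-then-classify structure is the same.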
Updated: 2021-09-07