Identifying disaster related social media for rapid response: a visual-textual fused CNN architecture
International Journal of Digital Earth (IF 3.7), Pub Date: 2019-06-23, DOI: 10.1080/17538947.2019.1633425
Xiao Huang, Zhenlong Li, Cuizhen Wang, Huan Ning

In recent years, social media platforms have played a critical role in mitigating a wide range of disasters. The highly up-to-date social responses and vast spatial coverage from millions of citizen sensors enable timely and comprehensive disaster investigation. However, automatically retrieving on-topic social media posts, especially when considering both their visual and textual information, remains a challenge. This paper presents an automatic approach to labeling on-topic social media posts using visual-textual fused features. Two convolutional neural networks (CNNs), an Inception-V3 CNN and a word-embedding CNN, are applied to extract visual and textual features, respectively, from social media posts. Once trained on our training sets, the two networks yield visual and textual features that are concatenated into a fused feature feeding the final classification step. The results suggest that both CNNs perform remarkably well in learning visual and textual features. The fused feature shows that adding the visual feature yields a more robust classification than using the textual feature alone. The on-topic posts, classified automatically by their texts and pictures, provide timely disaster documentation during an event. Coupled with rich spatial context when geotagged, social media could greatly aid a variety of disaster mitigation approaches.
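The fusion architecture described in the abstract can be illustrated with a short sketch. The PyTorch code below is not the authors' implementation; the layer sizes, vocabulary size, filter widths, and two-class output head are illustrative assumptions. It pairs an Inception-V3 backbone (classification head removed) with a simple word-embedding CNN over the post text, concatenates the two feature vectors, and feeds the result to a small classifier, mirroring the visual-textual fusion idea.

```python
# Minimal sketch of a visual-textual fused CNN classifier, assuming PyTorch/torchvision.
# Dimensions and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import models


class FusedClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, num_classes=2):
        super().__init__()
        # Visual branch: Inception-V3 used as a 2048-d feature extractor
        # (ImageNet-pretrained weights could be loaded instead of weights=None).
        self.visual = models.inception_v3(weights=None, aux_logits=False)
        self.visual.fc = nn.Identity()
        # Textual branch: word-embedding CNN over token ids of the post text.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 100, kernel_size=k) for k in (3, 4, 5)]
        )
        # Fusion classifier over the concatenated visual + textual features.
        self.classifier = nn.Sequential(
            nn.Linear(2048 + 300, 256), nn.ReLU(), nn.Linear(256, num_classes)
        )

    def forward(self, image, tokens):
        v = self.visual(image)                        # (B, 2048) visual feature
        e = self.embed(tokens).transpose(1, 2)        # (B, embed_dim, seq_len)
        t = torch.cat(                                # max-pooled n-gram features, (B, 300)
            [torch.relu(c(e)).max(dim=2).values for c in self.convs], dim=1
        )
        fused = torch.cat([v, t], dim=1)              # visual-textual fused feature
        return self.classifier(fused)
```

In use, a batch of 299×299 RGB images and padded token-id sequences would be passed as `model(images, tokens)`; the concatenated 2048 + 300 dimensional fused vector is what the final classifier sees.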



