Multimodal joint learning for personal knowledge base construction from Twitter-based lifelogs
Information Processing & Management (IF 8.6). Pub Date: 2019-11-12. DOI: 10.1016/j.ipm.2019.102148
An-Zi Yen , Hen-Hsen Huang , Hsin-Hsi Chen

People routinely log their lives on social media platforms. In this paper, we aim to extract life events by leveraging both the visual and textual information shared on Twitter, and to construct personal knowledge bases for individuals. The issues to be tackled include: (1) not all text descriptions are related to life events; (2) a life event in a text description can be expressed explicitly or implicitly; (3) the predicates of implicit life events are often absent; and (4) the mapping from natural language predicates to knowledge base relations may be ambiguous. We propose a multimodal joint learning approach, trained on both the text and the images of social media posts shared on Twitter, to detect life events in tweets and to extract event components including subjects, predicates, objects, and time expressions. Finally, the extracted information is transformed into knowledge base facts. Evaluation is performed on a collection of lifelogs from 18 Twitter users. Experimental results show that the proposed system is effective for life event extraction, and the constructed personal knowledge bases are expected to be useful for memory recall applications.
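The pipeline described above ends by converting extracted event components (subject, predicate, object, time) into knowledge base facts, with predicate-to-relation mapping as a potential failure point. The following is a minimal illustrative sketch of that final step, not the paper's actual implementation; the relation names and the `PREDICATE_TO_RELATION` table are invented for illustration, since the abstract does not specify the relation inventory.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical mapping from natural-language predicates to KB relations.
# In practice this mapping is learned and may be ambiguous (issue (4) above).
PREDICATE_TO_RELATION = {
    "ate": "Consume",
    "watched": "Watch",
    "visited": "Visit",
}

@dataclass
class LifeEvent:
    """An extracted life event: the four components named in the abstract."""
    subject: str
    predicate: str
    obj: str
    time: Optional[str] = None  # normalized time expression, e.g. "2019-04-27"

def to_kb_fact(event: LifeEvent) -> Optional[Tuple[str, str, str, Optional[str]]]:
    """Transform an extracted life event into a knowledge base fact.

    Returns None when the predicate cannot be mapped to a KB relation,
    which is where the ambiguity problem surfaces.
    """
    relation = PREDICATE_TO_RELATION.get(event.predicate)
    if relation is None:
        return None
    return (event.subject, relation, event.obj, event.time)

fact = to_kb_fact(LifeEvent("I", "watched", "Avengers", "2019-04-27"))
# fact == ("I", "Watch", "Avengers", "2019-04-27")
```

A real system would resolve ambiguous predicates from context (including the image modality) rather than through a fixed lookup table.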




Updated: 2020-04-21