Analyzing Connections Between User Attributes, Images, and Text
Cognitive Computation (IF 5.4). Pub Date: 2020-02-13. DOI: 10.1007/s12559-019-09695-3
Laura Burdick, Rada Mihalcea, Ryan L. Boyd, James W. Pennebaker

This work explores the relationship between a person’s demographic/psychological traits (e.g., gender and personality) and self-identity images and captions. We use a dataset of images and captions provided by N ≈ 1350 individuals, and we automatically extract features from both the images and captions. We identify several visual and textual properties that show reliable relationships with individual differences between participants. The automated techniques presented here allow us to draw interesting conclusions from our data that would be difficult to identify manually, and these techniques are extensible to other large datasets. Additionally, we consider the task of predicting gender and personality using both single modality features and multimodal features. We show that a multimodal predictive approach outperforms purely visual methods and purely textual methods. We believe that our work on the relationship between user characteristics and user data has relevance in online settings, where users upload billions of images each day.
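The abstract's claim that a multimodal predictor outperforms single-modality ones can be illustrated with the simplest fusion strategy: concatenating image and caption feature vectors before training one classifier ("early fusion"). The sketch below is purely illustrative and assumes nothing about the paper's actual features or model — the synthetic feature matrices, the label construction, and the choice of logistic regression are all hypothetical stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for automatically extracted features (hypothetical,
# not the paper's actual visual or textual features).
rng = np.random.default_rng(0)
n = 1350                                # roughly the N reported in the abstract
img_feats = rng.normal(size=(n, 16))    # e.g., color/composition descriptors
txt_feats = rng.normal(size=(n, 8))     # e.g., caption word-category counts

# Construct a binary label that weakly depends on BOTH modalities,
# so neither modality alone carries the full signal.
y = ((img_feats[:, 0] + txt_feats[:, 0]) > 0).astype(int)

def held_out_accuracy(X):
    """Train a logistic-regression classifier and score it on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

acc_img = held_out_accuracy(img_feats)                        # visual only
acc_txt = held_out_accuracy(txt_feats)                        # textual only
acc_multi = held_out_accuracy(np.hstack([img_feats, txt_feats]))  # early fusion
print(f"image: {acc_img:.2f}  text: {acc_txt:.2f}  multimodal: {acc_multi:.2f}")
```

Because the synthetic label depends on one feature from each modality, the concatenated representation is linearly separable while each single modality sees only half the signal, so the fused model scores highest. Real multimodal systems often use richer fusion (late fusion, learned joint embeddings), but concatenation is the standard baseline.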




Updated: 2020-04-20