Detecting depression in dyadic conversations with multimodal narratives and visualizations
arXiv - CS - Computation and Language. Pub Date: 2020-01-13, DOI: arxiv-2001.04809
Joshua Y. Kim, Greyson Y. Kim and Kalina Yacef

Conversations contain a wide spectrum of multimodal information that provides hints about the emotions and mood of the speaker. In this paper, we developed a system that supports humans in analyzing conversations. Our main contribution is the identification of appropriate multimodal features and the integration of such features into verbatim conversation transcripts. We demonstrate the ability of our system to take in a wide range of multimodal information and automatically generate a prediction score for the depression state of the individual. Our experiments showed that this approach yields better performance than the baseline model. Furthermore, the multimodal narrative approach makes it easy to integrate insights from other disciplines, such as conversation analysis and psychology. Lastly, this interdisciplinary and automated approach is a step towards emulating how practitioners record the course of treatment, as well as how conversation analysts have been analyzing conversations by hand.
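For illustration only (not drawn from the paper itself): a minimal sketch, assuming a simple turn-level data structure, of how multimodal cues might be woven into a verbatim transcript to form a "multimodal narrative" before scoring. All names, fields, and the final classifier call are hypothetical.

    # Hypothetical illustration, not the authors' implementation:
    # annotate each transcript turn with multimodal cues (e.g. prosody, gaze),
    # producing an enriched text "narrative" that a text model could score.
    def to_multimodal_narrative(turns):
        lines = []
        for t in turns:
            cues = ", ".join(t.get("cues", []))
            suffix = f" [{cues}]" if cues else ""
            lines.append(f'{t["speaker"]}: {t["text"]}{suffix}')
        return "\n".join(lines)

    example = [
        {"speaker": "Interviewer", "text": "How have you been sleeping?", "cues": []},
        {"speaker": "Participant", "text": "Not great, honestly.",
         "cues": ["low pitch", "long pause", "gaze down"]},
    ]
    print(to_multimodal_narrative(example))
    # In a full system, the narrative text would then be passed to a classifier,
    # e.g. score = classifier.predict(narrative), to obtain a depression score.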

Updated: 2020-01-29