Multimodal analysis of personality traits on videos of self-presentation and induced behavior
Journal on Multimodal User Interfaces (IF 2.2). Pub Date: 2020-11-02. DOI: 10.1007/s12193-020-00347-7
Dersu Giritlioğlu, Burak Mandira, Selim Firat Yilmaz, Can Ufuk Ertenli, Berhan Faruk Akgür, Merve Kınıklıoğlu, Aslı Gül Kurt, Emre Mutlu, Şeref Can Gürel, Hamdi Dibeklioğlu

Personality analysis is an important area of research in several fields, including psychology, psychiatry, and neuroscience. With the recent dramatic improvements in machine learning, it has also become a popular research area in computer science. While current computational methods are able to interpret behavioral cues (e.g., facial expressions, gestures, and voice) to estimate the level of (apparent) personality traits, accessible assessment tools are still substandard for practical use, not to mention the need for fast and accurate methods for such analyses. In this study, we present multimodal deep architectures to estimate the Big Five personality traits from (temporal) audio-visual cues and transcribed speech. Furthermore, for a detailed analysis of personality traits, we have collected a new audio-visual dataset, namely the Self-presentation and Induced Behavior Archive for Personality Analysis (SIAP). In contrast to the available datasets, SIAP introduces recordings of induced behavior in addition to self-presentation (speech) videos. With thorough experiments on the SIAP and ChaLearn LAP First Impressions datasets, we systematically assess the reliability of different behavioral modalities and their combined use. Furthermore, we investigate the characteristics and discriminative power of induced behavior for personality analysis, showing that induced behavior indeed carries signs of personality traits.
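The abstract describes combining several behavioral modalities (audio-visual cues and transcribed speech) to estimate Big Five trait scores. The following is a minimal toy sketch of one common way to do this, late fusion: each modality gets its own regression head and the per-modality outputs are averaged. This is not the paper's architecture; the feature dimensions, random weights, and linear heads are illustrative placeholders only, where a real system would use learned deep networks.

```python
# Toy late-fusion sketch (NOT the paper's architecture): each modality
# (visual, audio, transcribed speech) gets its own linear regression head,
# and the per-modality trait logits are averaged and squashed to [0, 1].
# All weights below are random placeholders, not learned parameters.
import numpy as np

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def modality_head(features, weights, bias):
    """Linear head mapping one modality's feature vector to 5 trait logits."""
    return features @ weights + bias

def fuse_predictions(per_modality_logits):
    """Late fusion: average logits across modalities, then apply a sigmoid."""
    avg = np.mean(per_modality_logits, axis=0)
    return 1.0 / (1.0 + np.exp(-avg))

rng = np.random.default_rng(0)
# Hypothetical feature dimensions for the three modalities.
dims = {"visual": 128, "audio": 64, "text": 32}
feats = {m: rng.standard_normal(d) for m, d in dims.items()}
params = {m: (rng.standard_normal((d, 5)) * 0.1, np.zeros(5))
          for m, d in dims.items()}

logits = [modality_head(feats[m], *params[m]) for m in dims]
scores = dict(zip(TRAITS, fuse_predictions(np.stack(logits))))
print(scores)
```

Averaging logits is only one fusion strategy; weighted fusion or joint (early) feature-level fusion are equally plausible readings of "combined use" of modalities in the abstract.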




Updated: 2020-11-02