Being the Center of Attention
ACM Transactions on Interactive Intelligent Systems (IF 3.4). Pub Date: 2020-07-07. DOI: 10.1145/3338245
Dario Dotti, Mirela Popa, Stylianos Asteriadis

This article proposes a novel study on personality recognition using video data from different scenarios. Our goal is to jointly model nonverbal behavioral cues with contextual information for a robust, multi-scenario personality recognition system. To this end, we build a novel multi-stream Convolutional Neural Network (CNN) framework that considers multiple sources of information. From a given scenario, we extract spatio-temporal motion descriptors for every individual in the scene, spatio-temporal motion descriptors encoding social group dynamics, and proxemics descriptors encoding the interaction with the surrounding context. All the proposed descriptors are mapped to the same feature space, facilitating the overall learning effort. Experiments on two public datasets demonstrate the effectiveness of jointly modeling the mutual Person-Context information, outperforming state-of-the-art results for personality recognition in two different scenarios. Finally, we present CNN class activation maps for each personality trait, shedding light on behavioral patterns linked with personality attributes.
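The abstract describes the architecture only in words; the following minimal PyTorch sketch illustrates the general idea of a multi-stream CNN whose per-stream encoders project into a shared feature space before fusion. All layer sizes, channel counts, descriptor-map shapes, and the five-trait output are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """Small CNN encoding one descriptor map (individual motion,
    group dynamics, or proxemics) into a shared feature space."""
    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Projection so that every stream lands in the same feature space.
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.conv(x).flatten(1))

class MultiStreamPersonalityNet(nn.Module):
    """Fuses the three descriptor streams and predicts trait scores.
    Channel counts and the 5-trait head are hypothetical choices."""
    def __init__(self, feat_dim: int = 128, num_traits: int = 5):
        super().__init__()
        self.individual = StreamEncoder(in_channels=2, feat_dim=feat_dim)
        self.group = StreamEncoder(in_channels=2, feat_dim=feat_dim)
        self.proxemics = StreamEncoder(in_channels=1, feat_dim=feat_dim)
        self.head = nn.Linear(3 * feat_dim, num_traits)

    def forward(self, ind, grp, prox):
        fused = torch.cat(
            [self.individual(ind), self.group(grp), self.proxemics(prox)],
            dim=1,
        )
        return self.head(fused)

# Example: one batch of hypothetical 64x64 descriptor maps.
net = MultiStreamPersonalityNet()
scores = net(
    torch.randn(4, 2, 64, 64),  # individual motion descriptors
    torch.randn(4, 2, 64, 64),  # group-dynamics descriptors
    torch.randn(4, 1, 64, 64),  # proxemics descriptors
)
print(scores.shape)  # torch.Size([4, 5])
```

The shared `feat_dim` across encoders mirrors the abstract's claim that all descriptors are mapped to the same feature space; late concatenation is one plausible fusion strategy, chosen here purely for illustration.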

Updated: 2020-07-07