Dimensional Affect Uncertainty Modelling for Apparent Personality Recognition
IEEE Transactions on Affective Computing ( IF 11.2 ) Pub Date : 2022-07-12 , DOI: 10.1109/taffc.2022.3189974
Mani Kumar Tellamekala 1 , Timo Giesbrecht 2 , Michel Valstar 1

Despite achieving impressive performance, dimensional affect or emotion recognition from faces is largely based on uncertainty-unaware models that predict only point estimates. Modelling uncertainty is important for learning reliable facial emotion recognition models that can (a) holistically quantify predictive uncertainty estimates and (b) propagate those estimates to the benefit of downstream behavioural analysis tasks. In this work, we first quantify uncertainties in dimensional emotion recognition by adopting the framework of epistemic (model) and aleatoric (data) uncertainty categorisation. Then, to evaluate the practical utility of uncertainty-aware emotion predictions, we introduce them into learning an important downstream task, apparent personality recognition. To this end, we ask two questions: how to effectively (a) use already known behavioural attributes (emotions) in a downstream task (personality recognition), and (b) summarise global temporal context from uncertainty-aware emotion predictions fused with image embeddings. To answer these questions, we learn a conditional latent variable model that builds on recently proposed neural latent variable models. Our experiments on two in-the-wild datasets, SEWA for emotion recognition and ChaLearn for personality recognition, demonstrate that fusing epistemic and aleatoric emotion uncertainties significantly improves personality recognition performance, with a $\sim$42% relative improvement in Pearson correlation coefficient, leading to a new state of the art.
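The abstract distinguishes aleatoric (data) from epistemic (model) uncertainty in valence/arousal prediction. As a rough illustration only, the PyTorch sketch below shows one common way to obtain both kinds of estimates: a heteroscedastic Gaussian output head for aleatoric uncertainty and Monte Carlo dropout for epistemic uncertainty. The names `EmotionHead`, `gaussian_nll`, and `mc_dropout_predict` are hypothetical; this is not the authors' architecture nor their conditional latent variable model.

```python
# Illustrative sketch (not the paper's implementation): heteroscedastic
# aleatoric uncertainty plus MC-dropout epistemic uncertainty for
# dimensional emotion (valence/arousal) prediction from face embeddings.
import torch
import torch.nn as nn


class EmotionHead(nn.Module):
    """Maps a face embedding to the mean and log-variance of valence/arousal."""

    def __init__(self, in_dim=512, n_emotions=2, p_drop=0.2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean = nn.Linear(256, n_emotions)     # point estimate
        self.log_var = nn.Linear(256, n_emotions)  # aleatoric (data) uncertainty

    def forward(self, x):
        h = self.backbone(x)
        return self.mean(h), self.log_var(h)


def gaussian_nll(mean, log_var, target):
    """Heteroscedastic Gaussian negative log-likelihood (aleatoric training loss)."""
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()


@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Epistemic uncertainty via Monte Carlo dropout: keep dropout active at
    test time and take the variance of the sampled means."""
    model.train()  # keeps dropout layers active
    means, ale_vars = [], []
    for _ in range(n_samples):
        mu, log_var = model(x)
        means.append(mu)
        ale_vars.append(log_var.exp())
    means = torch.stack(means)                      # (n_samples, batch, n_emotions)
    epistemic = means.var(dim=0)                    # model uncertainty
    aleatoric = torch.stack(ale_vars).mean(dim=0)   # data uncertainty
    return means.mean(dim=0), epistemic, aleatoric


# Example: 4 face embeddings of dimension 512
model = EmotionHead()
x = torch.randn(4, 512)
pred, epi, ale = mc_dropout_predict(model, x)
print(pred.shape, epi.shape, ale.shape)  # torch.Size([4, 2]) each
```

In the paper, per-frame uncertainty-aware emotion predictions of this kind are fused with image embeddings and summarised over time by a conditional latent variable model for apparent personality recognition.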

Updated: 2022-07-12