Using temporal features of observers' physiological measures to distinguish between genuine and fake smiles
IEEE Transactions on Affective Computing (IF 11.2), Pub Date: 2020-01-01, DOI: 10.1109/taffc.2018.2878029
Md Zakir Hossain, Tom Gedeon, Ramesh Sankaranarayana

Future affective computing research could be enhanced by enabling computers to recognise a displayer's mental state from an observer's reaction (measured by physiological signals), using this information to improve recognition algorithms, and eventually building computer systems that are more responsive to human emotions. In this paper, observers' physiological signals are analysed to distinguish displayers' genuine smiles from fake ones. Thirty smile videos were collected from four benchmark databases and classified as showing genuine or fake smiles. Forty observers viewed the videos while four physiological signals were recorded: pupillary response (PR), electrocardiogram (ECG), galvanic skin response (GSR), and blood volume pulse (BVP). A number of temporal features were extracted after several processing steps, and features minimally correlated between genuine and fake smiles were selected using the NCCA (canonical correlation analysis with neural network) system. Classification accuracy from the PR features reached 98.8 percent under a leave-one-observer-out process. In comparison, the best current image processing technique [1] achieved 95 percent accuracy on the same video data, and observers' conscious choices were 59 percent correct on average, rising to 90 percent by majority vote. Our results demonstrate that humans can non-consciously (or emotionally) recognise smile quality 4 percent better than current image processing techniques and 9 percent better than the conscious choices of groups.
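The leave-one-observer-out protocol mentioned above can be illustrated with a short sketch: train on the trials of all but one observer, test on the held-out observer, and repeat for every observer. The sketch below uses scikit-learn; the random feature matrix, the label assignment, and the RBF-kernel SVM are placeholder assumptions, and the paper's NCCA feature selection and actual classifier are not reproduced here.

```python
# Minimal sketch of leave-one-observer-out evaluation, under the assumptions above.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_observers, n_videos, n_features = 40, 30, 12             # counts match the study
X = rng.normal(size=(n_observers * n_videos, n_features))  # placeholder temporal features
y = rng.integers(0, 2, size=n_observers * n_videos)        # placeholder labels: 1 = genuine, 0 = fake
groups = np.repeat(np.arange(n_observers), n_videos)       # observer ID for each trial

logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    # Train on 39 observers, test on the held-out observer's trials.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean leave-one-observer-out accuracy: {np.mean(accuracies):.3f}")
```

Holding out whole observers, rather than individual trials, prevents one person's physiological idiosyncrasies from leaking between the training and test sets, which is why the 98.8 percent figure reflects generalisation to unseen observers.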

Updated: 2020-01-01