Continuous emotion estimation of facial expressions on JAFFE and CK+ datasets for human–robot interaction
Intelligent Service Robotics (IF 2.3). Pub Date: 2019-11-28. DOI: 10.1007/s11370-019-00301-x
Hyun-Soon Lee , Bo-Yeong Kang

Human–robot interaction has long relied on estimating human emotions from facial expressions, voice, and gestures. Human emotions have typically been categorized in a discrete manner; in contrast, we estimate continuous emotions from facial images in common datasets. This study used linear regression to numerically quantify human emotions as valence and arousal, mapping the raw images onto the two respective coordinate axes. Face images from the Japanese Female Facial Expression (JAFFE) dataset and the Extended Cohn–Kanade (CK+) dataset were used in the experiments. The emotions in these datasets were rated by 85 participants. The best result from a series of experiments was a minimum root mean square error on the JAFFE dataset of 0.1661 for valence and 0.1379 for arousal. Compared with previous methods based on stimuli such as songs and sentences, the proposed method showed outstanding emotion estimation performance when tested on these common datasets.
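The pipeline the abstract describes, a linear regression from face-image features to continuous (valence, arousal) coordinates scored by root mean square error, can be sketched as follows. This is a minimal illustration on synthetic data: all array sizes, variable names, and the ridge regularizer are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

# Hypothetical sketch: linear regression mapping flattened face-image
# features to continuous (valence, arousal) labels, scored with RMSE.
# The data below is synthetic; the real JAFFE/CK+ features and the
# 85-participant annotations are not reproduced here.
rng = np.random.default_rng(0)

n_train, n_test, n_features = 120, 30, 64   # assumed sizes, illustrative only
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))
true_w = rng.normal(size=(n_features, 2))   # columns: valence, arousal
Y_train = X_train @ true_w + 0.05 * rng.normal(size=(n_train, 2))
Y_test = X_test @ true_w + 0.05 * rng.normal(size=(n_test, 2))

# Closed-form least squares with a small ridge term for numerical
# stability: W = (X^T X + lam I)^{-1} X^T Y
lam = 1e-3
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ Y_train)

# Predict on held-out images and report per-axis RMSE,
# one value for valence and one for arousal.
pred = X_test @ W
rmse = np.sqrt(np.mean((pred - Y_test) ** 2, axis=0))
print(f"valence RMSE: {rmse[0]:.4f}, arousal RMSE: {rmse[1]:.4f}")
```

In practice, the features would be derived from the dataset images and the targets from the participants' valence/arousal ratings; the closed-form solve above is one standard way to fit such a multi-output linear model.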
