Automatic vs. Human Recognition of Pain Intensity from Facial Expression on the X-ITE Pain Database
Sensors (IF 3.4) Pub Date: 2021-05-10, DOI: 10.3390/s21093273
Ehsan Othman, Philipp Werner, Frerk Saxen, Ayoub Al-Hamadi, Sascha Gruss, Steffen Walter

Prior work on automated methods demonstrated that it is possible to recognize pain intensity from frontal faces in videos, while it is commonly assumed that humans are far more adept at this task than machines. In this paper, we investigate whether this assumption is correct by comparing the results achieved by two human observers with the results achieved by a Random Forest classifier (RFc) baseline model (called RFc-BL) and by three proposed automated models. The first proposed model is a Random Forest classifying descriptors of Action Unit (AU) time series; the second is a modified MobileNetV2 CNN classifying face images that combine three points in time; and the third is a custom deep network combining two CNN branches using the same input as for MobileNetV2 plus knowledge of the RFc. We conduct experiments with the X-ITE phasic pain database, which comprises videotaped responses to heat and electrical pain stimuli, each at three intensities. Distinguishing these six stimulation types plus no stimulation was the main 7-class classification task for the human observers and the automated approaches. Further, we conducted reduced 5-class and 3-class classification experiments, applied multi-task learning, and introduced a newly proposed sample weighting method. Experimental results show that the pain assessments of the human observers are significantly better than guessing and outperform the automatic baseline approach (RFc-BL) by about 1%; however, the human performance is quite poor because pain that may ethically be induced in experimental studies often does not show in the facial reaction. We discovered that downweighting those samples during training improves the performance on all samples. The proposed RFc and two-CNNs models (using the proposed sample weighting) significantly outperformed the human observers by about 6% and 7%, respectively.
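As a rough illustration of the first proposed model and the sample weighting idea, the sketch below trains a Random Forest on simple statistical descriptors of AU time series and downweights low-activity samples during training. This is a minimal sketch on synthetic data, not the paper's implementation: the descriptors, the activity proxy, and the 0.5 weight are illustrative assumptions.

```python
# Illustrative sketch only: the descriptors and the weighting rule below
# are hypothetical stand-ins for the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def au_descriptors(au_series):
    """Summarize each Action Unit (AU) time series with simple statistics.

    au_series: array of shape (n_aus, n_frames).
    Returns a flat descriptor vector of length 3 * n_aus.
    """
    return np.concatenate([
        au_series.mean(axis=1),
        au_series.max(axis=1),
        au_series.std(axis=1),
    ])

# Synthetic stand-in data: 200 clips, 5 AUs, 64 frames each, 7 classes
# (six stimulation types plus no stimulation, as in the paper's main task).
X = np.stack([au_descriptors(rng.random((5, 64))) for _ in range(200)])
y = rng.integers(0, 7, size=200)

# Downweight samples with low facial activity, a crude proxy for
# "pain that does not show in the facial reaction"; the paper reports
# that downweighting such samples during training helps overall.
activity = X[:, :5].mean(axis=1)  # mean-AU portion of the descriptor
weights = np.where(activity < np.median(activity), 0.5, 1.0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y, sample_weight=weights)
print(clf.predict(X[:3]))
```

The key mechanism is `sample_weight` in `fit`, which scales each sample's contribution to the split criterion, so ambiguous low-expression clips influence the trees less without being discarded.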

Updated: 2021-05-10