Four-class emotion classification in virtual reality using pupillometry
Journal of Big Data (IF 8.1) Pub Date: 2020-07-06, DOI: 10.1186/s40537-020-00322-9
Lim Jia Zheng, James Mountstephens, Jason Teo

Background

Emotion classification remains a challenging problem in affective computing. The large majority of emotion classification studies rely on electroencephalography (EEG) and/or electrocardiography (ECG) signals and classify emotions into only two or three classes. Moreover, most such studies use either music or visual stimuli presented through conventional displays such as computer monitors or television screens. This study reports a novel approach that uses pupillometry alone, in the form of pupil diameter data, to classify emotions into four distinct classes according to Russell's Circumplex Model of Emotions, with emotional stimuli presented in a virtual reality (VR) environment. The stimuli used in this experiment are 360° videos presented through a VR headset. Pupil diameter, acquired with an eye-tracker, serves as the sole classification feature. Three classifiers were used for emotion classification: Support Vector Machine (SVM), k-Nearest Neighbor (KNN), and Random Forest (RF).
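As a concrete illustration, the sketch below (Python with scikit-learn) trains the same three classifier families on pupil-diameter feature windows. The abstract does not give the paper's actual windowing, preprocessing, or hyperparameters, so the data shapes and settings here are purely hypothetical placeholders.

# Hypothetical sketch of four-class emotion classification from pupil
# diameter alone. Random data stands in for a single subject's
# eye-tracker recordings (intra-subject setup); all shapes and
# hyperparameters below are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))    # 200 stimulus windows x 50 pupil-diameter samples
y = rng.integers(1, 5, size=200)  # quadrant labels 1..4 (Russell's Circumplex Model)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in classifiers.items():
    # 5-fold cross-validation within the one subject's data
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2%}")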

Findings

SVM achieved the best performance for the four-class intra-subject classification task, with an average accuracy of 57.05%, more than twice the 25% expected from a random four-class classifier. Although the accuracy leaves substantial room for improvement, this is the first systematic study to use eye-tracking data alone, without any supplementary sensor modalities, for human emotion classification, and it demonstrates that even with pupil diameter as a single feature, emotions can be classified into four distinct classes with a useful level of accuracy. Moreover, the best performance for recognizing a particular class was 70.83%, achieved by the KNN classifier on Quadrant 3 emotions.
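The abstract does not state how the per-class figure was computed; read as per-class recall, it would come from the diagonal of a confusion matrix, as in this hypothetical sketch:

# Hypothetical sketch: per-class accuracy (recall) from a confusion
# matrix, the kind of metric behind a figure like 70.83% for one
# quadrant. Labels and predictions below are made-up toy data.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 2, 2, 3, 3, 3, 4, 4, 4])
y_pred = np.array([1, 2, 2, 2, 3, 3, 1, 4, 3, 4])
cm = confusion_matrix(y_true, y_pred, labels=[1, 2, 3, 4])
per_class = cm.diagonal() / cm.sum(axis=1)  # correct / total, per quadrant
for quadrant, acc in zip([1, 2, 3, 4], per_class):
    print(f"Quadrant {quadrant}: {acc:.2%}")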

Conclusion

This study presents the first systematic investigation of pupillometry as the sole feature for classifying emotions into four distinct classes using VR stimuli. The ability to classify emotions from pupil data alone represents a promising new direction for affective computing: new applications could be built on the readily available webcams in laptops and other camera-equipped mobile devices, without the need for specialized and costly sensor modalities such as EEG and/or ECG.

