A comparative analysis of machine learning methods for emotion recognition using EEG and peripheral physiological signals
Journal of Big Data (IF 8.1) | Pub Date: 2020-03-11 | DOI: 10.1186/s40537-020-00289-7
Vikrant Doma, Matin Pirouz

Emotion recognition using brain signals has the potential to change the way we identify and treat some health conditions. Difficulties and limitations may arise in general emotion recognition software due to the restricted number of facial expression triggers, the dissembling of emotions, or alexithymia in some individuals. Such triggers can instead be identified by studying the continuous brainwaves generated by the human brain. Electroencephalogram (EEG) signals from the brain give a more diverse insight into emotional states that a person may not be able to express. EEG brainwave signals reflect changes in electrical potential resulting from communication networks between neurons. This research involves analyzing epoch data from EEG sensor channels and performing a comparative analysis of multiple machine learning techniques, namely Support Vector Machine (SVM), K-nearest neighbor, Linear Discriminant Analysis, Logistic Regression, and Decision Trees. Each of these models was tested with and without principal component analysis (PCA) for dimensionality reduction. Grid search was also utilized for hyper-parameter tuning of each tested machine learning model over a Spark cluster to lower execution time. The DEAP dataset, a multimodal dataset for the analysis of human affective states, was used in this study. The predictions were based on the labels given by the participants for each of the 40 one-minute-long excerpts of music videos. Participants rated each video in terms of the level of arousal, valence, like/dislike, dominance, and familiarity. The binary class classifiers were trained on time-segmented, 15 s intervals of epoch data, individually for each of the 4 classes. PCA with SVM performed the best, producing an F1-score of 84.73% with 98.01% recall in the 30 to 45 s segmentation interval. For each time segment and binary training class, a different classification model converges to better accuracy and recall than the others. The results show that different classification models must be used to identify different emotional states.
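To make the described pipeline concrete, below is a minimal sketch (not the authors' code) of a PCA + SVM binary classifier with grid-search hyper-parameter tuning, of the kind the abstract compares. It assumes a feature matrix X of per-segment EEG features and a binary label y (e.g. high vs. low valence); the array shapes, feature counts, and parameter grid are illustrative assumptions, not values taken from the paper, and scikit-learn is used here in place of the Spark cluster mentioned above.

```python
# Sketch: PCA + SVM with grid-search tuning on DEAP-style features.
# Shapes, labels, and the parameter grid are illustrative assumptions.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import f1_score, recall_score

rng = np.random.default_rng(0)
X = rng.standard_normal((1280, 160))   # placeholder: 32 subjects x 40 trials, 160 features per segment
y = rng.integers(0, 2, size=1280)      # placeholder binary labels (e.g. valence rating >= 5)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardize, reduce dimensionality with PCA, then classify with SVM.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA()),
    ("svm", SVC()),
])

# Illustrative search space; the paper's exact grid is not specified here.
param_grid = {
    "pca__n_components": [10, 20, 40],
    "svm__C": [0.1, 1, 10],
    "svm__kernel": ["rbf", "linear"],
}

search = GridSearchCV(pipeline, param_grid, scoring="f1", cv=5, n_jobs=-1)
search.fit(X_train, y_train)

y_pred = search.predict(X_test)
print("best params:", search.best_params_)
print("F1:", f1_score(y_test, y_pred), "recall:", recall_score(y_test, y_pred))
```

In the study, a model of this form would be trained separately for each 15 s time segment and each of the four binary rating classes (arousal, valence, like/dislike, dominance), which is how segment- and class-specific winners such as PCA with SVM emerge.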




Updated: 2020-04-21