Investigating the Use of Pretrained Convolutional Neural Network on Cross-Subject and Cross-Dataset EEG Emotion Recognition.
Sensors (IF 3.4), Pub Date: 2020-04-04, DOI: 10.3390/s20072034
Yucel Cimtay, Erhan Ekmekcioglu

The electroencephalogram (EEG) has attracted great interest in emotion recognition studies due to its resistance to deceptive human actions. This is one of the most significant advantages of brain signals over visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions across different people, as well as for the same person at different time instances. This nonstationary nature of EEG limits its accuracy when subject independence is the priority. The aim of this study is to increase subject-independent recognition accuracy by exploiting pretrained state-of-the-art Convolutional Neural Network (CNN) architectures. Unlike similar studies that extract spectral band power features from the EEG readings, our study uses raw EEG data after applying windowing, pre-adjustments, and normalization. Removing manual feature extraction from the training pipeline avoids the risk of discarding hidden features in the raw data and helps leverage the deep neural network's power to uncover unknown features. To improve classification accuracy further, a median filter is used to eliminate false detections along the prediction interval of emotions. This method yields a mean cross-subject accuracy of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively. It also yields a mean cross-subject accuracy of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained on the SEED dataset was tested with the DEAP dataset, yielding a mean prediction accuracy of 58.1% across all subjects and emotion classes. The results show that, in terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, while having limited complexity due to the elimination of the need for feature extraction.
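The abstract describes a pipeline of windowing and normalizing raw EEG, classifying each window, and median-filtering the per-window predictions. The Python sketch below illustrates those three steps in minimal form. It is not the authors' code: the sampling rate, window length, z-score normalization, and median kernel size are illustrative assumptions, and the classifier itself (the pretrained CNN) is omitted.

```python
# A minimal sketch of the preprocessing and post-processing steps described
# in the abstract. All parameter values here are assumptions for illustration,
# not values taken from the paper.
import numpy as np
from scipy.signal import medfilt

def window_eeg(eeg, fs=128, win_sec=1.0):
    """Split a (channels, samples) raw EEG recording into fixed-length windows."""
    win = int(fs * win_sec)
    n = eeg.shape[1] // win
    # Result shape: (n_windows, channels, win)
    return np.stack([eeg[:, i * win:(i + 1) * win] for i in range(n)])

def normalize(windows):
    """Z-score each channel within each window (one plausible 'pre-adjustment')."""
    mu = windows.mean(axis=-1, keepdims=True)
    sd = windows.std(axis=-1, keepdims=True) + 1e-8
    return (windows - mu) / sd

def smooth_predictions(labels, kernel=5):
    """Median-filter per-window class labels to suppress isolated false detections."""
    return medfilt(np.asarray(labels, dtype=float), kernel_size=kernel).astype(int)
```

For example, smooth_predictions([0, 0, 1, 0, 0]) returns [0, 0, 0, 0, 0]: the lone outlier label is voted out by its neighbors, which is how a median filter removes spurious detections along a prediction interval without shifting genuine emotion transitions.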

Updated: 2020-04-06