TSception: Capturing Temporal Dynamics and Spatial Asymmetry From EEG for Emotion Recognition
IEEE Transactions on Affective Computing (IF 11.2), Pub Date: 2022-04-22, DOI: 10.1109/taffc.2022.3169001
Yi Ding, Neethu Robinson, Su Zhang, Qiuhao Zeng, Cuntai Guan

The high temporal resolution and the asymmetric spatial activations are essential attributes of the electroencephalogram (EEG) underlying emotional processes in the brain. To learn the temporal dynamics and spatial asymmetry of EEG towards accurate and generalized emotion recognition, we propose TSception, a multi-scale convolutional neural network that classifies emotions from EEG. TSception consists of dynamic temporal, asymmetric spatial, and high-level fusion layers, which learn discriminative representations in the time and channel dimensions simultaneously. The dynamic temporal layer consists of multi-scale 1D convolutional kernels whose lengths are related to the sampling rate of EEG; it learns the dynamic temporal and frequency representations of EEG. The asymmetric spatial layer takes advantage of the asymmetric EEG patterns of emotion, learning discriminative global and hemispheric representations. The learned spatial representations are then fused by a high-level fusion layer. Using more generalized cross-validation settings, the proposed method is evaluated on two publicly available datasets, DEAP and MAHNOB-HCI. The performance of the proposed network is compared with previously reported methods such as SVM, KNN, FBFgMDM, FBTSC, unsupervised learning, DeepConvNet, ShallowConvNet, and EEGNet. TSception achieves higher classification accuracies and F1 scores than the other methods in most of the experiments. The code is available at: https://github.com/yi-ding-cs/TSception
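
To make the multi-scale temporal idea concrete, below is a minimal PyTorch sketch (not the authors' implementation; see the linked repository for that) of a dynamic-temporal-style layer in which each parallel 1D kernel spans a fixed fraction of the EEG sampling rate, so the branches cover different temporal windows. The class name, filter count, window fractions, and pooling choices here are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a multi-scale temporal convolution over EEG, assuming
# input shaped (batch, 1, EEG_channels, time_samples). Kernel lengths are
# tied to the sampling rate (e.g., 0.5 s, 0.25 s, 0.125 s windows).
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, sampling_rate=128, num_filters=9,
                 window_fractions=(0.5, 0.25, 0.125)):
        super().__init__()
        self.branches = nn.ModuleList()
        for frac in window_fractions:
            kernel_len = int(frac * sampling_rate)
            self.branches.append(nn.Sequential(
                # kernel spans only the time axis; EEG channels are kept separate
                nn.Conv2d(1, num_filters, kernel_size=(1, kernel_len), stride=1),
                nn.LeakyReLU(),
                nn.AvgPool2d(kernel_size=(1, 8), stride=(1, 8)),
            ))

    def forward(self, x):
        # x: (batch, 1, EEG_channels, time_samples)
        # concatenate the branch outputs along the time dimension
        return torch.cat([branch(x) for branch in self.branches], dim=-1)

if __name__ == "__main__":
    eeg = torch.randn(4, 1, 28, 512)              # 4 trials, 28 channels, 4 s at 128 Hz
    out = MultiScaleTemporalConv(sampling_rate=128)(eeg)
    print(out.shape)                              # (4, 9, 28, time_after_pooling)
```

In this sketch the longer kernels emphasize slower temporal (lower-frequency) structure and the shorter kernels faster structure; a spatial layer operating across the channel dimension (e.g., over the two hemispheres) and a fusion stage would follow in a full model.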

Updated: 2022-04-22