Data augmentation for enhancing EEG-based emotion recognition with deep generative models
Journal of Neural Engineering (IF 4), Pub Date: 2020-10-13, DOI: 10.1088/1741-2552/abb580
Yun Luo, Li-Zhen Zhu, Zi-Yu Wan, Bao-Liang Lu

Objective. The data scarcity problem in emotion recognition from electroencephalography (EEG) makes it difficult to build an accurate affective model with machine learning algorithms or deep neural networks. Inspired by emerging deep generative models, we propose three methods for augmenting EEG training data to enhance the performance of emotion recognition models. Approach. Our proposed methods are based on two deep generative models, the variational autoencoder (VAE) and the generative adversarial network (GAN), and two data augmentation strategies, full and partial usage. Under the full usage strategy, all of the generated data are appended to the training dataset without any assessment of their quality, whereas under the partial usage strategy, only high-quality generated data are selected and appended to the training dataset. The three resulting methods are called conditional Wasserstein GAN (cWGAN), selective VAE (sVAE), and selective WGAN (sWGAN). Main results.
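To make the generative side of the approach concrete, below is a minimal sketch of a conditional Wasserstein GAN for EEG feature vectors, assuming PyTorch and a gradient-penalty variant of the critic loss. The feature dimension (310), number of emotion classes (3), noise dimension, layer sizes, and penalty weight are all illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: conditional WGAN (cWGAN) over EEG feature vectors (assumed setup).
import torch
import torch.nn as nn

FEAT_DIM, NUM_CLASSES, NOISE_DIM = 310, 3, 100  # assumed sizes


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, FEAT_DIM),
        )

    def forward(self, z, y_onehot):
        # Condition generation on the emotion label.
        return self.net(torch.cat([z, y_onehot], dim=1))


class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),  # unbounded Wasserstein score
        )

    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))


def gradient_penalty(critic, real, fake, y_onehot):
    # Standard WGAN-GP penalty on samples interpolated between real and fake.
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(interp, y_onehot)
    grad, = torch.autograd.grad(score.sum(), interp, create_graph=True)
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()


def critic_step(critic, gen, x_real, y_onehot, opt_c, lam=10.0):
    # One critic update: push real scores up, fake scores down, keep gradients near 1.
    z = torch.randn(x_real.size(0), NOISE_DIM, device=x_real.device)
    x_fake = gen(z, y_onehot).detach()
    loss = (critic(x_fake, y_onehot).mean()
            - critic(x_real, y_onehot).mean()
            + lam * gradient_penalty(critic, x_real, x_fake, y_onehot))
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    return loss.item()
```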

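The distinction between the full and partial usage strategies can likewise be sketched in a few lines. The confidence-based filter below is an illustrative stand-in for the paper's quality-selection criterion, not a reproduction of it; the classifier choice (an SVM trained on the real data) and the threshold are assumptions.

```python
# Sketch: full vs. partial usage of generated EEG features (assumed selection rule).
import numpy as np
from sklearn.svm import SVC


def augment(x_real, y_real, x_gen, y_gen, strategy="partial", threshold=0.9):
    """Append generated EEG features to the real training set."""
    if strategy == "full":
        # Full usage: take every generated sample, no quality check.
        keep = np.ones(len(x_gen), dtype=bool)
    else:
        # Partial usage (illustrative): keep only generated samples that a
        # classifier trained on real data assigns to their intended label
        # with high confidence.
        clf = SVC(probability=True).fit(x_real, y_real)
        proba = clf.predict_proba(x_gen)
        cols = np.searchsorted(clf.classes_, y_gen)
        keep = proba[np.arange(len(x_gen)), cols] >= threshold
    return (np.vstack([x_real, x_gen[keep]]),
            np.concatenate([y_real, y_gen[keep]]))
```

With `strategy="full"` every generated sample is appended, mirroring the full usage strategy; with `strategy="partial"` only the samples that pass the (assumed) quality filter are added, mirroring the selective sVAE/sWGAN idea.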
Updated: 2020-10-15