Deficient auditory emotion processing but intact emotional multisensory integration in alexithymia
Psychophysiology (IF 3.7) Pub Date: 2021-03-20, DOI: 10.1111/psyp.13806
Zhihao Wang 1,2, Mai Chen 3, Katharina S Goerlich 2, André Aleman 1,2, Pengfei Xu 4,5,6, Yuejia Luo 1,4,5,7,8

Alexithymia has been associated with emotion recognition deficits in both auditory and visual domains. Although emotions are inherently multimodal in daily life, little is known regarding abnormalities of emotional multisensory integration (eMSI) in relation to alexithymia. Here, we employed an emotional Stroop-like audiovisual task while recording event-related potentials (ERPs) in individuals with high alexithymia levels (HA) and low alexithymia levels (LA). During the task, participants had to indicate whether a voice was spoken in a sad or angry prosody while ignoring the simultaneously presented static face which could be either emotionally congruent or incongruent to the human voice. We found that HA performed worse and showed higher P2 amplitudes than LA independent of emotion congruency. Furthermore, difficulties in identifying and describing feelings were positively correlated with the P2 component, and P2 correlated negatively with behavioral performance. Bayesian statistics showed no group differences in eMSI and classical integration-related ERP components (N1 and N2). Although individuals with alexithymia indeed showed deficits in auditory emotion recognition as indexed by decreased performance and higher P2 amplitudes, the present findings suggest an intact capacity to integrate emotional information from multiple channels in alexithymia. Our work provides valuable insights into the relationship between alexithymia and neuropsychological mechanisms of emotional multisensory integration.

Updated: 2021-03-20