Experience transforms crossmodal object representations in the anterior temporal lobes
eLife (IF 7.7), Pub Date: 2024-04-22, DOI: https://doi.org/10.7554/elife.83382
Aedan Yue Li, Natalia Ladyka-Wojcik, Heba Qazilbash, Ali Golestani, Dirk Bernhardt-Walther, Chris B Martin, Morgan Barense

Combining information from multiple senses is essential to object recognition, core to the ability to learn concepts, make new inferences, and generalize across distinct entities. Yet how the mind combines sensory input into coherent crossmodal representations - the crossmodal binding problem - remains poorly understood. Here, we applied multi-echo fMRI across a four-day paradigm, in which participants learned 3-dimensional crossmodal representations created from well-characterized unimodal visual shape and sound features. Our novel paradigm decoupled the learned crossmodal object representations from their baseline unimodal shapes and sounds, thus allowing us to track the emergence of crossmodal object representations as they were learned by healthy adults. Critically, we found that two anterior temporal lobe structures - temporal pole and perirhinal cortex - differentiated learned from non-learned crossmodal objects, even when controlling for the unimodal features that composed those objects. These results provide evidence for integrated crossmodal object representations in the anterior temporal lobes that were different from the representations for the unimodal features. Furthermore, we found that perirhinal cortex representations were by default biased towards visual shape, but this initial visual bias was attenuated by crossmodal learning. Thus, crossmodal learning transformed perirhinal representations such that they were no longer predominantly grounded in the visual modality, which may be a mechanism by which object concepts gain their abstraction.
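The core analysis logic described above can be illustrated with a small representational-similarity-style sketch: test whether an anterior temporal region's activity patterns distinguish learned from non-learned crossmodal objects after regressing out a model of their unimodal shape and sound features. This is a minimal sketch under assumed, simulated data; the object counts, voxel counts, variable names, and the specific regression/contrast are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal, illustrative sketch (not the authors' pipeline): an RSA-style contrast
# testing whether ROI activity patterns differentiate learned from non-learned
# crossmodal objects once a unimodal (shape + sound) feature model is regressed out.
# All data here are simulated; sizes and names are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

n_objects = 12          # hypothetical crossmodal objects (half learned, half not)
n_voxels = 200          # hypothetical voxels in an ROI (e.g., perirhinal cortex)
learned = np.array([True] * 6 + [False] * 6)

# Simulated ROI patterns: one pattern per object (stand-in for beta estimates).
patterns = rng.normal(size=(n_objects, n_voxels))

# Simulated unimodal feature model: shape + sound feature vectors per object.
unimodal_features = rng.normal(size=(n_objects, 20))

# Neural and model representational dissimilarity matrices (condensed form).
neural_rdm = pdist(patterns, metric="correlation")
model_rdm = pdist(unimodal_features, metric="correlation")

# Regress the unimodal feature model out of the neural RDM; keep the residuals.
X = np.column_stack([np.ones_like(model_rdm), model_rdm])
beta, *_ = np.linalg.lstsq(X, neural_rdm, rcond=None)
residual_rdm = neural_rdm - X @ beta

# Split residual dissimilarities into pairs of learned vs. non-learned objects
# (pair order of pdist matches the row-major upper triangle).
i, j = np.triu_indices(n_objects, k=1)
pair_learned = learned[i] & learned[j]
pair_nonlearned = ~learned[i] & ~learned[j]

# If learning builds integrated object representations beyond the unimodal
# features, learned pairs should remain more distinct in the residual RDM.
print("mean residual dissimilarity, learned pairs:    ",
      residual_rdm[pair_learned].mean())
print("mean residual dissimilarity, non-learned pairs:",
      residual_rdm[pair_nonlearned].mean())
```

In a real analysis this contrast would be computed per participant and region of interest (e.g., temporal pole, perirhinal cortex) and then tested at the group level; the sketch only shows the within-subject step on toy data.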
