Semantically congruent audiovisual integration with modal-based attention accelerates auditory short-term memory retrieval
Attention, Perception, & Psychophysics (IF 1.7). Pub Date: 2022-05-31. DOI: 10.3758/s13414-021-02437-4
Hongtao Yu, Aijun Wang, Ming Zhang, JiaJia Yang, Satoshi Takahashi, Yoshimichi Ejima, Jinglong Wu

Evidence has shown that the benefits of multisensory integration for unisensory perception are asymmetric: auditory perception receives greater multisensory benefits, especially when attention is directed toward a task-irrelevant visual stimulus. At present, it remains unclear whether the benefits of semantically (in)congruent multisensory integration with modal-based attention are also asymmetric for subsequent unisensory short-term memory (STM) retrieval. Using a delayed matching-to-sample paradigm, the present study investigated this issue by manipulating the attention focus during multisensory memory encoding. The results revealed that both visual and auditory STM retrieval reaction times were faster under semantically congruent multisensory encoding conditions than under unisensory encoding conditions. We suggest that the formation of a coherent multisensory representation can be optimized by restricted multisensory encoding and rapidly triggered by subsequent unisensory memory retrieval demands. Crucially, auditory STM retrieval was accelerated exclusively by semantically congruent multisensory memory encoding, indicating that the less effective sensory modality of memory retrieval relies more heavily on the prior formation of a coherent multisensory representation optimized by modal-based attention.




Updated: 2022-06-01