‘When’ and ‘what’ did you see? A novel fMRI-based visual decoding framework
Journal of Neural Engineering (IF 4) Pub Date: 2020-10-13, DOI: 10.1088/1741-2552/abb691
Chong Wang 1,2, Hongmei Yan 1, Wei Huang 1, Jiyi Li 1, Jiale Yang 1, Rong Li 1, Leiyao Zhang 1, Liang Li 1, Jiang Zhang 3, Zhentao Zuo 4, Huafu Chen 1,2,5
Objective. Visual perception decoding plays an important role in understanding our visual systems. Recent functional magnetic resonance imaging (fMRI) studies have made great advances in predicting the visual content of a single stimulus from the evoked response. In this work, we propose a novel framework that extends previous work by simultaneously decoding the temporal and category information of visual stimuli from fMRI activities. Approach. 3 T fMRI data were acquired from five volunteers while they viewed five categories of natural images presented at random intervals. For each subject, we trained two classification-based decoding modules that identify the occurrence time and semantic categories of the visual stimuli, respectively. In each module, we adopted a recurrent neural network (RNN), which has proven highly effective at learning nonlinear representations from sequential data, to analyze the temporal dynamics of fMRI activity patterns. Finally, we integrated the two modules into a complete framework. Main results. The proposed framework shows promising decoding performance: the average decoding accuracy across the five subjects was over 19 times the chance level. Moreover, we compared the decoding performance of the early visual cortex (eVC) and the high-level visual cortex (hVC). The comparison indicated that both eVC and hVC participate in processing visual stimuli, but the semantic information of the stimuli is mainly represented in hVC. Significance. The proposed framework advances the decoding of visual experiences and facilitates a better understanding of our visual functions.
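The abstract describes RNN-based classification modules that map a sequence of fMRI activity patterns to a stimulus category. As a rough illustration of that idea (not the authors' implementation, which is not specified here), the sketch below runs a vanilla RNN over a simulated fMRI window and emits class probabilities; all dimensions, weights, and names are hypothetical.

```python
import numpy as np

# Illustrative sketch only: a vanilla RNN classifier over a window of
# fMRI time points. Shapes and parameter values are assumptions, not
# the paper's trained model.

rng = np.random.default_rng(0)

N_VOXELS = 100   # hypothetical number of voxels (features) per time point
N_HIDDEN = 32    # hypothetical RNN hidden-state size
N_CLASSES = 5    # five natural-image categories, as in the study
T = 10           # hypothetical number of fMRI volumes per decoding window

# Randomly initialized parameters stand in for trained weights.
W_xh = rng.normal(0, 0.1, (N_HIDDEN, N_VOXELS))
W_hh = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))
b_h = np.zeros(N_HIDDEN)
W_hy = rng.normal(0, 0.1, (N_CLASSES, N_HIDDEN))
b_y = np.zeros(N_CLASSES)

def rnn_classify(x_seq):
    """Run a vanilla RNN over a (T, N_VOXELS) sequence and return
    softmax class probabilities computed from the final hidden state."""
    h = np.zeros(N_HIDDEN)
    for x_t in x_seq:                        # iterate over time points
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    logits = W_hy @ h + b_y
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()

# One simulated fMRI response window.
probs = rnn_classify(rng.normal(size=(T, N_VOXELS)))
print(probs.shape)  # (5,) -- one probability per category
```

In the paper's framework, one such module would flag *when* a stimulus occurred and a second would decode *what* category it belonged to; here the two roles are collapsed into a single toy classifier for brevity.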




Updated: 2020-10-13