Decoding overt shifts of attention in depth through pupillary and cortical frequency tagging
Journal of Neural Engineering ( IF 3.7 ) Pub Date : 2021-03-09 , DOI: 10.1088/1741-2552/ab8e8f
Claudio de'Sperati 1, 2 , Silvestro Roatta 3 , Niccolò Zovetti 1, 4 , Tatiana Baroni 1
Objective. We recently developed a prototype of a novel human-computer interface for assistive communication based on voluntary shifts of attention (gaze) from a far target to a near target, associated with a decrease in pupil size (Pupillary Accommodative Response, PAR), an automatic vegetative response that can be easily recorded. We report here an extension of that approach based on pupillary and cortical frequency tagging.

Approach. In 18 healthy volunteers, we investigated the possibility of decoding attention shifts in depth by exploiting the evoked oscillatory responses of the pupil (Pupillary Oscillatory Response, POR, recorded with a low-cost device) and of the visual cortex (Steady-State Visual Evoked Potentials, SSVEP, recorded from 4 scalp electrodes). With a simple binary communication protocol (focusing on the far target meaning 'No', focusing on the near target meaning 'Yes'), we aimed to discriminate when the observer's overt attention (gaze) shifted from the far to the near target, the two targets flickering at different frequencies.

Main results. By applying a binary linear classifier (Support Vector Machine, SVM, with leave-one-out cross-validation) to the POR and SSVEP signals, we found that, with only twenty trials and no behavioural training of the subjects, the offline median decoding accuracy was 75% with POR and 80% with SSVEP signals. When the two signals were combined, accuracy reached 83%. The number of observers for whom accuracy exceeded 70% was 11/18, 12/18 and 14/18 with POR, SSVEP and combined features, respectively. A signal detection analysis confirmed these results.

Significance. The present findings suggest that exploiting frequency tagging with pupillary or cortical responses during an attention shift in the depth plane, either separately or combined, is a promising approach to realizing a device for communicating with Complete Locked-In Syndrome (CLIS) patients when oculomotor control is unreliable and traditional assistive communication, even when based on PAR, is unsuccessful.
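The decoding scheme described above — tag each depth plane with its own flicker frequency, extract spectral power at the two tagged frequencies from the recorded response, then evaluate a binary classifier with leave-one-out cross-validation over twenty trials — can be sketched on synthetic data. The flicker frequencies (1.2 and 1.8 Hz), the noise level, and the nearest-centroid classifier (a simple stand-in for the paper's linear SVM) are illustrative assumptions, not values from the study.

```python
import cmath
import math
import random

FS = 60        # sampling rate (Hz); illustrative
DUR = 5.0      # trial duration (s); illustrative
F_FAR = 1.2    # hypothetical tagging frequency of the far target ('No')
F_NEAR = 1.8   # hypothetical tagging frequency of the near target ('Yes')

def band_power(signal, freq, fs=FS):
    """Normalized single-bin DFT magnitude at `freq`."""
    n = len(signal)
    s = sum(x * cmath.exp(-2j * math.pi * freq * i / fs)
            for i, x in enumerate(signal))
    return abs(s) / n

def make_trial(label, rng, noise=0.5):
    """Synthetic POR/SSVEP trace: an oscillation at the attended target's
    tagging frequency plus Gaussian noise (purely illustrative data)."""
    f = F_NEAR if label == 1 else F_FAR
    n = int(FS * DUR)
    return [math.sin(2 * math.pi * f * i / FS) + rng.gauss(0.0, noise)
            for i in range(n)]

def features(signal):
    """Two-dimensional feature vector: power at each tagged frequency."""
    return (band_power(signal, F_FAR), band_power(signal, F_NEAR))

def loo_accuracy(feats, labels):
    """Leave-one-out CV with a nearest-centroid linear classifier
    (standing in for the linear SVM used in the paper)."""
    correct = 0
    for i in range(len(feats)):
        centroids = {}
        for lab in (0, 1):
            rest = [f for j, (f, l) in enumerate(zip(feats, labels))
                    if j != i and l == lab]
            centroids[lab] = tuple(sum(c) / len(rest) for c in zip(*rest))
        pred = min((0, 1),
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(feats[i], centroids[lab])))
        correct += int(pred == labels[i])
    return correct / len(feats)

rng = random.Random(0)
labels = [i % 2 for i in range(20)]   # 10 'No' (far) and 10 'Yes' (near) trials
feats = [features(make_trial(lab, rng)) for lab in labels]
print(f"Leave-one-out accuracy: {loo_accuracy(feats, labels):.2f}")
```

On real data the same two-frequency feature vector would be computed separately from the pupil trace and from the SSVEP channels; combining the two modalities, as in the 83%-accuracy condition, then amounts to concatenating the feature vectors before classification.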



Updated: 2021-03-09