A consensus-based elastic matching algorithm for mapping recall fixations onto encoding fixations in the looking-at-nothing paradigm
Behavior Research Methods (IF 4.6), Pub Date: 2021-03-22, DOI: 10.3758/s13428-020-01513-1
Xi Wang, Kenneth Holmqvist, Marc Alexa

We present an algorithmic method for aligning recall fixations with encoding fixations, for use in looking-at-nothing paradigms that either record recall eye movements in silence or aim to speed up the analysis of recall data recorded during speech. The algorithm uses a novel consensus-based elastic matching scheme to estimate which encoding fixations correspond to later recall fixations. It is not a scanpath comparison method: fixation sequence order is ignored and only the spatial configuration of fixation positions is used. The algorithm has three internal parameters and is reasonably stable over a wide range of parameter values. We evaluate its performance by testing whether the recalled objects it identifies correspond with independent assessments of which objects in an image are subjectively important. Our results show that the mapped recall fixations align well with important regions of the images. We illustrate this result in four groups of use cases, investigating the roles of low-level visual features, faces, signs and text, and people of different sizes in the recall of encoded scenes. The plots from these examples corroborate the finding that the algorithm aligns recall fixations with the most likely important regions in the images. The examples also show how the algorithm can differentiate between image objects that were fixated during silent recall and objects that were not visually attended at recall, even though they were fixated during encoding.
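The paper's actual elastic matching algorithm is not reproduced in this abstract. As a minimal sketch of the consensus idea only, the toy below assumes a single global translation between recall and encoding fixations (far simpler than elastic matching) and uses RANSAC-style voting over candidate shifts; the function name, parameters (`n_trials`, `inlier_tol`), and the rigid-shift model are all illustrative assumptions, not the authors' method. Like the described algorithm, it ignores fixation order and uses only position configurations.

```python
import numpy as np

def map_recall_to_encoding(recall, encoding, n_trials=200, inlier_tol=50.0, seed=0):
    """Toy consensus matching (illustrative assumption, not the paper's method):
    repeatedly hypothesize a translation from one random recall/encoding pair,
    score it by how many shifted recall fixations land within `inlier_tol` px of
    some encoding fixation, keep the highest-consensus shift, then assign each
    recall fixation to its nearest encoding fixation under that shift.
    Fixation sequence order is ignored; only positions (N x 2 arrays) are used."""
    rng = np.random.default_rng(seed)
    best_shift, best_score = np.zeros(2), -1
    for _ in range(n_trials):
        r = recall[rng.integers(len(recall))]
        e = encoding[rng.integers(len(encoding))]
        shift = e - r  # hypothesis: this recall fixation revisits that encoding fixation
        dists = np.linalg.norm((recall + shift)[:, None, :] - encoding[None, :, :], axis=2)
        score = int((dists.min(axis=1) < inlier_tol).sum())  # consensus = inlier count
        if score > best_score:
            best_score, best_shift = score, shift
    dists = np.linalg.norm((recall + best_shift)[:, None, :] - encoding[None, :, :], axis=2)
    return dists.argmin(axis=1)  # matched encoding-fixation index per recall fixation
```

A real elastic formulation would allow locally varying deformations rather than one global shift, which is what lets the published algorithm cope with the systematic spatial distortions typical of recall fixations.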




Updated: 2021-03-23