Relevance Prediction from Eye-movements Using Semi-interpretable Convolutional Neural Networks
arXiv - CS - Information Retrieval. Pub Date: 2020-01-15. arXiv:2001.05152
Nilavra Bhattacharya, Somnath Rakshit, Jacek Gwizdka, Paul Kogut

We propose an image-classification method to predict the perceived relevance of text documents from eye movements. An eye-tracking study was conducted in which participants read short news articles and rated them as relevant or irrelevant for answering a trigger question. We encode participants' eye-movement scanpaths as images and then train a convolutional neural network classifier on these scanpath images. The trained classifier is used to predict participants' perceived relevance of news articles from the corresponding scanpath images. The method is content-independent: the classifier requires no knowledge of the screen content or the user's information task. Even with little data, the image classifier can predict perceived relevance with up to 80% accuracy. Compared with similar eye-tracking studies in the literature, this scanpath image classification method outperforms previously reported metrics by appreciable margins. We also attempt to interpret how the image classifier differentiates between scanpaths on relevant and irrelevant documents.
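The abstract does not include code, but the pipeline it describes (rasterise an eye-movement scanpath into an image, then classify that image with a CNN) can be illustrated with a small sketch. The PyTorch example below is a hypothetical simplification, not the authors' implementation: the fixation format (x, y, duration), the 64x64 canvas, and the two-layer network are illustrative assumptions, and the saccade lines and colour encoding a scanpath image would typically carry are omitted.

```python
# Minimal sketch (not the authors' code) of scanpath-image relevance prediction.
# Fixations (x, y, duration) are drawn onto a blank canvas; the resulting image
# is fed to a small CNN that outputs relevant / irrelevant logits.
import numpy as np
import torch
import torch.nn as nn

IMG_SIZE = 64  # assumed canvas resolution for the scanpath image

def scanpath_to_image(fixations, screen_w, screen_h, img_size=IMG_SIZE):
    """Rasterise a scanpath into a single-channel image.

    fixations: iterable of (x, y, duration_ms) in screen coordinates.
    Fixation duration is encoded as pixel intensity; saccades are ignored
    in this simplified version.
    """
    img = np.zeros((img_size, img_size), dtype=np.float32)
    for x, y, dur in fixations:
        col = int(x / screen_w * (img_size - 1))
        row = int(y / screen_h * (img_size - 1))
        img[row, col] += dur
    if img.max() > 0:
        img /= img.max()  # normalise intensities to [0, 1]
    return img

class ScanpathCNN(nn.Module):
    """Small CNN for binary relevance prediction from scanpath images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (IMG_SIZE // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: [irrelevant, relevant]
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: one synthetic scanpath -> image -> (untrained) prediction
fixations = [(100, 200, 250), (400, 220, 180), (650, 500, 300)]
img = scanpath_to_image(fixations, screen_w=1024, screen_h=768)
x = torch.from_numpy(img).unsqueeze(0).unsqueeze(0)  # shape (1, 1, 64, 64)
logits = ScanpathCNN()(x)
print(logits.argmax(dim=1))  # 0 = irrelevant, 1 = relevant
```

In a training setup, each participant-document pair would contribute one scanpath image labelled with the participant's relevance rating, and the network would be trained with a standard cross-entropy loss; the content-independence claimed in the abstract follows from the fact that the input image contains only gaze geometry, not the text being read.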

Updated: 2020-01-16