As Go the Feet…: On the Estimation of Attentional Focus from Stance.
ACM Transactions on Computer-Human Interaction: A Publication of the Association for Computing Machinery. Pub Date: 2008-01-01. DOI: 10.1145/1452392.1452412
Francis Quek, Roger Ehrich, Thurmon Lockhart

The estimation of the direction of visual attention is critical to a large number of interactive systems. This paper investigates the cross-modal relation of the position of one's feet (or standing stance) to the focus of gaze. The intuition is that while one CAN have a range of attentional foci from a particular stance, one may be MORE LIKELY to look in specific directions given an approach vector and stance. We posit that this cross-modal relationship is constrained by biomechanics and personal style. We define a stance vector that models the approach direction before stopping and the pose of a subject's feet. We present a study in which the subjects' feet and approach vector are tracked. The subjects read aloud the contents of note cards at 4 locations, and the order of "visits" to the cards was randomized. Ten subjects read 40 lines of text each, yielding 400 stance vectors and gaze directions. We divided our data into 4 sets of 300 training and 100 test vectors and trained a neural net to estimate gaze direction from the stance vector. Our results show that 31% of our gaze orientation estimates were within 5°, 51% were within 10°, and 60% were within 15°. Given the ability to track foot position, the procedure is minimally invasive.
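
The abstract specifies the evaluation protocol (400 stance vectors, 4 folds of 300 training / 100 test vectors, a neural-net regressor, and accuracy reported as the fraction of estimates within 5°, 10°, and 15°) but not the feature encoding or network architecture. The Python sketch below is a minimal illustration of that protocol under stated assumptions: the 4-component stance features, the hidden-layer size, and the random stand-in data are all hypothetical, not taken from the paper.

```python
# Minimal sketch of the evaluation protocol in the abstract; the feature
# encoding and network architecture are illustrative assumptions, not
# taken from the paper.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 400 stance vectors (10 subjects x 40 readings).
# Assumed features: approach-vector angle, left/right foot orientations,
# and foot separation; target is gaze direction in degrees.
X = rng.uniform(-180.0, 180.0, size=(400, 4))
y = rng.uniform(-180.0, 180.0, size=400)

# 4 folds of 100 test vectors each, i.e. 300 training / 100 test per fold.
fold_errors = []
for train_idx, test_idx in KFold(n_splits=4, shuffle=True, random_state=0).split(X):
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X[train_idx], y[train_idx])
    pred = net.predict(X[test_idx])
    # Angular error with wrap-around at +/-180 degrees.
    err = np.abs((pred - y[test_idx] + 180.0) % 360.0 - 180.0)
    fold_errors.append(err)

err = np.concatenate(fold_errors)
for thresh in (5, 10, 15):
    pct = 100.0 * np.mean(err <= thresh)
    print(f"estimates within {thresh} deg: {pct:.0f}%")
```

With real tracked stance vectors in place of the random stand-ins, the same loop reproduces the paper's reporting format; the random data here will of course not recover the reported 31%/51%/60% figures.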

Updated: 2019-11-01