The Gaze Dialogue Model: Nonverbal Communication in HHI and HRI
IEEE Transactions on Cybernetics (IF 9.4), Pub Date: 2022-11-29, DOI: 10.1109/tcyb.2022.3222077
Mirko Rakovic, Nuno Ferreira Duarte, Jorge Marques, Aude Billard, Jose Santos-Victor

When humans interact with each other, eye-gaze movements have to support motor control as well as communication. On the one hand, we need to fixate the task goal to retrieve the visual information required for safe and precise action execution. On the other hand, gaze movements serve communication, both for reading the intentions of our interaction partners and for signaling our own action intentions to others. We study this Gaze Dialogue between two participants working on a collaborative task involving two types of actions: 1) individual actions and 2) actions-in-interaction. We recorded the eye-gaze data of both participants during the interaction sessions in order to build a computational model, the Gaze Dialogue, encoding the interplay of eye movements during the dyadic interaction. The model also captures the correlation between the different gaze fixation points and the nature of the action. This knowledge is used to infer the type of action performed by an individual. We validated the model against the recorded eye-gaze behavior of one subject, taking the eye-gaze behavior of the other subject as the input. Finally, we used the model to design a humanoid robot controller that provides interpersonal gaze coordination in human–robot interaction scenarios. During the interaction, the robot is able to: 1) adequately infer the human action from gaze cues; 2) adjust its gaze fixation according to the human eye-gaze behavior; and 3) signal nonverbal cues that correlate with the robot's own action intentions.
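The abstract describes inferring an action type (individual vs. action-in-interaction) from a partner's gaze fixations. As a minimal illustrative sketch only — the paper's actual state space, fixation targets, and probabilities are not given in the abstract, so the target labels and likelihood values below are hypothetical — one could frame this as naive Bayesian inference over a sequence of observed fixation targets:

```python
import math

# Hypothetical fixation targets and per-action-type likelihoods.
# These numbers are illustrative assumptions, not values from the paper.
FIXATION_LIKELIHOODS = {
    "individual":  {"object": 0.7, "own_hand": 0.2, "partner_face": 0.1},
    "interaction": {"object": 0.3, "own_hand": 0.1, "partner_face": 0.6},
}

def infer_interaction_posterior(fixations, prior=0.5):
    """Posterior P(action-in-interaction | fixation sequence), assuming
    fixations are conditionally independent given the action type."""
    log_odds = math.log(prior / (1 - prior))
    for f in fixations:
        log_odds += math.log(
            FIXATION_LIKELIHOODS["interaction"][f]
            / FIXATION_LIKELIHOODS["individual"][f]
        )
    return 1.0 / (1.0 + math.exp(-log_odds))

# A fixation stream dominated by looks to the partner's face favours
# "action-in-interaction"; object-dominated streams favour "individual".
p_interact = infer_interaction_posterior(
    ["partner_face", "object", "partner_face"])
p_individual = infer_interaction_posterior(
    ["object", "object", "own_hand"])
```

The same posterior could drive a robot controller's gaze policy, e.g. switching between task-directed and partner-directed fixation once the posterior crosses a threshold; the paper's actual controller design is not detailed in this abstract.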

Updated: 2024-08-28