Lifelogging caption generation via fourth-person vision in a human–robot symbiotic environment
ROBOMECH Journal (IF 1.5), Pub Date: 2020-09-24, DOI: 10.1186/s40648-020-00181-2
Kazuto Nakashima , Yumi Iwashita , Ryo Kurazume

Automatic analysis of our daily lives and activities through a first-person lifelog camera provides opportunities to improve our life rhythms and to support our limited visual memories. In particular, the task of generating captions from first-person lifelog images, so as to express visual experiences in language, has been actively studied in recent years. First-person images capture scenes approximating what users actually see; however, their visual cues are often insufficient to express the user's context, since the field of view is limited by the user's own intention. Our challenge is to generate lifelog captions using a meta-perspective called "fourth-person vision," a novel concept that complementarily exploits visual information from the first-, second-, and third-person perspectives. First, we assume a human–robot symbiotic scenario that provides a second-person perspective from a camera mounted on the robot and a third-person perspective from a camera fixed in the symbiotic room. To validate our approach in this scenario, we collect perspective-aware lifelog videos and corresponding caption annotations. Subsequently, we propose a multi-perspective image captioning model composed of an image-wise salient region encoder, an attention module that adaptively fuses the salient regions, and a caption decoder that generates scene descriptions. We demonstrate that our proposed model based on the fourth-person concept greatly improves captioning performance over single- and double-perspective models.
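To make the three-stage architecture concrete (region encoder, adaptive attention fusion, caption decoder), the sketch below shows one plausible realization in PyTorch. The class name `FourthPersonCaptioner`, all layer sizes, and the choice of an LSTM decoder with additive attention are illustrative assumptions for this sketch, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class FourthPersonCaptioner(nn.Module):
    """Minimal sketch of the three-stage design described in the abstract:
    (1) a per-image salient-region encoder, (2) an attention module that
    adaptively fuses regions across perspectives, (3) a caption decoder.
    Layer names and sizes are assumptions, not the paper's exact model."""

    def __init__(self, region_dim=2048, hidden_dim=512, vocab_size=10000):
        super().__init__()
        # (1) Project pre-extracted salient-region features (e.g., from an
        # off-the-shelf detector) into a shared embedding space.
        self.region_proj = nn.Linear(region_dim, hidden_dim)
        # (2) Additive attention scoring a region against the decoder state.
        self.attn_score = nn.Linear(hidden_dim * 2, 1)
        # (3) LSTM decoder that emits one caption token per step.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.LSTMCell(hidden_dim * 2, hidden_dim)
        self.vocab_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, regions, tokens):
        # regions: (B, n_perspectives * n_regions, region_dim), i.e. salient
        #          regions pooled from the first-, second-, and third-person
        #          images; tokens: (B, T) caption ids for teacher forcing.
        feats = torch.relu(self.region_proj(regions))            # (B, R, H)
        B, R, H = feats.shape
        h = feats.mean(dim=1)                                    # init state
        c = torch.zeros_like(h)
        logits = []
        for t in range(tokens.size(1)):
            # Score every region against the current decoder state, then
            # fuse regions from all perspectives into a single context.
            q = h.unsqueeze(1).expand(-1, R, -1)                 # (B, R, H)
            scores = self.attn_score(torch.cat([feats, q], -1))  # (B, R, 1)
            weights = torch.softmax(scores, dim=1)
            context = (weights * feats).sum(dim=1)               # (B, H)
            x = torch.cat([self.embed(tokens[:, t]), context], dim=-1)
            h, c = self.decoder(x, (h, c))
            logits.append(self.vocab_out(h))
        return torch.stack(logits, dim=1)                        # (B, T, V)

# Usage with dummy inputs: 3 perspectives x 10 regions per image.
model = FourthPersonCaptioner()
regions = torch.randn(2, 3 * 10, 2048)
tokens = torch.randint(0, 10000, (2, 12))
out = model(regions, tokens)  # (2, 12, 10000) per-step vocabulary logits
```

Because the attention weights are recomputed at every decoding step, the model can shift between perspectives word by word, which is one way to realize the "adaptive fusion" the abstract describes.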

Updated: 2020-09-25