Inferring spatial relations from textual descriptions of images
Pattern Recognition (IF 7.5) Pub Date: 2021-01-27, DOI: 10.1016/j.patcog.2021.107847
Aitzol Elu, Gorka Azkune, Oier Lopez de Lacalle, Ignacio Arganda-Carreras, Aitor Soroa, Eneko Agirre

Generating an image from its textual description requires both a certain level of language understanding and common sense knowledge about the spatial relations of the physical entities being described. In this work, we focus on inferring the spatial relation between entities, a key step in the process of composing scenes based on text. More specifically, given a caption containing a mention of a subject and the location and size of that subject's bounding box, our goal is to predict the location and size of an object mentioned in the caption. Previous work did not use the caption text, but instead relied on a manually provided relation holding between the subject and the object. In fact, the evaluation datasets used contain manually annotated ontological triplets but no captions, making the exercise unrealistic: a manual step was required, and systems did not leverage the richer information in captions. Here we present a system that uses the full caption, together with Relations in Captions (REC-COCO), a dataset derived from MS-COCO which makes it possible to evaluate spatial relation inference from captions directly. Our experiments show that: (1) it is possible to infer the size and location of an object with respect to a given subject directly from the caption; (2) using the full text places the object better than using a manually annotated relation. Our work paves the way for systems that, given a caption, decide which entities need to be depicted and their respective locations and sizes, in order to then generate the final image.
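To make the task concrete, the input/output interface described above can be sketched as follows. This is a minimal, hypothetical illustration with a trivial keyword heuristic as a stand-in predictor; the paper's actual system is a learned model, and all names here (`BBox`, `predict_object_bbox`) are placeholders, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class BBox:
    # Normalized image coordinates in [0, 1]:
    # (x, y) is the top-left corner; w and h are width and height.
    x: float
    y: float
    w: float
    h: float

def predict_object_bbox(caption: str, subject_box: BBox) -> BBox:
    """Toy heuristic baseline: place the object relative to the subject
    according to a spatial keyword found in the caption. A real system
    would learn this mapping from (caption, subject box) pairs."""
    text = f" {caption.lower()} "
    if " under " in text or " below " in text:
        # Object directly below the subject, same size.
        return BBox(subject_box.x, min(1.0, subject_box.y + subject_box.h),
                    subject_box.w, subject_box.h)
    if " above " in text or " over " in text:
        # Object directly above the subject, same size.
        return BBox(subject_box.x, max(0.0, subject_box.y - subject_box.h),
                    subject_box.w, subject_box.h)
    # Default: same size, immediately to the right of the subject.
    return BBox(min(1.0, subject_box.x + subject_box.w), subject_box.y,
                subject_box.w, subject_box.h)
```

The point of the sketch is only the interface: the predictor consumes the raw caption rather than a pre-extracted relation triplet, which is the shift the paper argues for.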


Updated: 2021-02-02