Layout2image: Image Generation from Layout
International Journal of Computer Vision ( IF 19.5 ) Pub Date : 2020-02-24 , DOI: 10.1007/s11263-020-01300-7
Bo Zhao , Weidong Yin , Lili Meng , Leonid Sigal

Despite significant recent progress on generative models, controlled generation of images depicting multiple, complex object layouts remains a difficult problem. Among the core challenges are the diversity of appearance a given object may possess and, as a result, the exponentially large set of images consistent with a specified layout. To address these challenges, we propose a novel approach for layout-based image generation; we call it Layout2Im. Given a coarse spatial layout (bounding boxes + object categories), our model can generate a set of realistic images which have the correct objects in the desired locations. The representation of each object is disentangled into a specified/certain part (category) and an unspecified/uncertain part (appearance). The category is encoded using a word embedding, and the appearance is distilled into a low-dimensional vector sampled from a normal distribution. Individual object representations are composed together using a convolutional LSTM to obtain an encoding of the complete layout, which is then decoded to an image. Several loss terms are introduced to encourage accurate and diverse image generation. The proposed Layout2Im model significantly outperforms the previous state of the art, boosting the best reported inception score by 24.66% and 28.57% on the very challenging COCO-Stuff and Visual Genome datasets, respectively. Extensive experiments also demonstrate our model’s ability to generate complex and diverse images with many objects.
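The disentangled object representation the abstract describes — a category code from an embedding table plus an appearance vector sampled from a normal distribution, placed into the layout by bounding box — can be sketched minimally as follows. This is an illustrative NumPy sketch, not the paper's implementation: all sizes (`EMB_DIM`, `Z_DIM`, the feature-map resolution) are made-up, and the convolutional-LSTM fusion step is simplified to summing overlapping regions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 4 categories, 8-d embedding, 8-d appearance.
NUM_CATEGORIES, EMB_DIM, Z_DIM = 4, 8, 8
H = W = 16  # coarse feature-map resolution

# Category embedding table: the "specified/certain" part of each object code.
embedding = rng.normal(size=(NUM_CATEGORIES, EMB_DIM))

def object_latent(category_id):
    """Disentangled object code: category embedding + appearance z ~ N(0, I)."""
    z = rng.standard_normal(Z_DIM)  # the "unspecified/uncertain" appearance part
    return np.concatenate([embedding[category_id], z])

def compose_layout(objects):
    """Paste each object's latent into its bounding-box region of a feature map.

    objects: list of (category_id, (x0, y0, x1, y1)) in feature-map coordinates.
    The paper fuses object feature maps with a convolutional LSTM before
    decoding to an image; here overlapping regions are simply summed.
    """
    canvas = np.zeros((EMB_DIM + Z_DIM, H, W))
    for cat, (x0, y0, x1, y1) in objects:
        code = object_latent(cat)
        canvas[:, y0:y1, x0:x1] += code[:, None, None]
    return canvas

# Two overlapping objects: categories 0 and 2 in different boxes.
layout = [(0, (0, 0, 8, 8)), (2, (4, 4, 16, 16))]
feat = compose_layout(layout)
print(feat.shape)  # (16, 16, 16): channels x H x W
```

Because the appearance vector is resampled on every call, re-running `compose_layout` on the same layout yields a different encoding — which is how one layout can map to a diverse set of generated images.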

Updated: 2020-02-24