Future Urban Scenes Generation Through Vehicles Synthesis
arXiv - CS - Computational Geometry. Pub Date: 2020-07-01, DOI: arXiv-2007.00323
Alessandro Simoni, Luca Bergamini, Andrea Palazzi, Simone Calderara, Rita Cucchiara

In this work we propose a deep learning pipeline to predict the visual future appearance of an urban scene. Despite recent advances, generating the entire scene in an end-to-end fashion is still far from being achieved. Instead, here we follow a two-stage approach, where interpretable information is included in the loop and each actor is modelled independently. We leverage a per-object novel view synthesis paradigm, i.e., generating a synthetic representation of an object undergoing a geometric roto-translation in 3D space. Our model can be easily conditioned with constraints (e.g. input trajectories) provided by state-of-the-art tracking methods or by the user directly. This allows us to generate a set of diverse, realistic futures starting from the same input in a multi-modal fashion. We show, visually and quantitatively, the superiority of this approach over traditional end-to-end scene-generation methods on CityFlow, a challenging real-world dataset.
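The geometric roto-translation the abstract refers to is a rigid motion: each object's 3D points are rotated and then translated before a new view is synthesised. As a minimal sketch of that idea (the function name and the restriction to a yaw rotation about the vertical axis are illustrative assumptions, not the authors' implementation):

```python
import math

def roto_translate(points, yaw, translation):
    """Apply a rigid roto-translation to a list of (x, y, z) points:
    rotate by `yaw` radians about the vertical (y) axis, then translate.
    A vehicle moving on a ground plane is well approximated this way."""
    c, s = math.cos(yaw), math.sin(yaw)
    tx, ty, tz = translation
    out = []
    for x, y, z in points:
        # Rotate in the ground plane (x, z); the height y is unchanged.
        xr = c * x + s * z
        zr = -s * x + c * z
        out.append((xr + tx, y + ty, zr + tz))
    return out

# A 90-degree yaw turn maps the point (1, 2, 0) to (0, 2, -1).
print(roto_translate([(1.0, 2.0, 0.0)], math.pi / 2, (0.0, 0.0, 0.0)))
```

In the pipeline described above, such a transform would be driven by a predicted or user-supplied trajectory for each actor, and the transformed object geometry would then condition the image-synthesis stage.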

Updated: 2020-10-23