Neural Scene Graphs for Dynamic Scenes
arXiv - CS - Graphics, Pub Date: 2020-11-20, DOI: arxiv-2011.10379
Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, Felix Heide

Recent implicit neural rendering methods have demonstrated that it is possible to learn accurate view synthesis for complex scenes by predicting their volumetric density and color, supervised solely by a set of RGB images. However, existing methods are restricted to learning efficient interpolations of static scenes, encoding all scene objects into a single neural network and lacking the ability to represent dynamic scenes or decompositions into individual scene objects. In this work, we present the first neural rendering method that decomposes dynamic scenes into scene graphs. We propose a learned scene graph representation, which encodes object transformations and radiance, to efficiently render novel arrangements and views of the scene. To this end, we learn implicitly encoded scenes, combined with a jointly learned latent representation, to describe objects with a single implicit function. We assess the proposed method on synthetic and real automotive data, validating that our approach learns dynamic scenes, only by observing a video of the scene, and allows rendering photo-realistic views of novel scene compositions with unseen sets of objects at unseen poses.
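A minimal sketch (not the authors' code) of the idea described in the abstract: a background node plus dynamic-object nodes, where every object is described by a single shared implicit function conditioned on a per-object latent code and placed in the scene via a rigid transform. All module sizes, class names, and the compositing scheme below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    """Shared MLP mapping a 3D point (plus an optional latent code) to (density, RGB)."""
    def __init__(self, latent_dim=0, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # sigma + RGB
        )

    def forward(self, x, z=None):
        if z is not None:
            x = torch.cat([x, z.expand(x.shape[0], -1)], dim=-1)
        out = self.net(x)
        sigma = torch.relu(out[..., :1])       # non-negative density
        rgb = torch.sigmoid(out[..., 1:])      # colors in [0, 1]
        return sigma, rgb

class ObjectNode:
    """Leaf node of the scene graph: a latent code plus a rigid world-from-object pose."""
    def __init__(self, latent, rotation, translation):
        self.latent = latent                   # (latent_dim,)
        self.rotation = rotation               # (3, 3)
        self.translation = translation         # (3,)

    def world_to_object(self, pts):
        # Inverse rigid transform: p_obj = R^T (p_world - t)
        return (pts - self.translation) @ self.rotation

class NeuralSceneGraph(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.background = ImplicitField(latent_dim=0)
        self.object_field = ImplicitField(latent_dim=latent_dim)  # shared across objects

    def query(self, pts, objects):
        """Composite density/color at world-space points from background + object nodes."""
        sigma, rgb = self.background(pts)
        for obj in objects:
            s_o, c_o = self.object_field(obj.world_to_object(pts), obj.latent)
            # Illustrative compositing: densities add, colors are density-weighted.
            rgb = (sigma * rgb + s_o * c_o) / (sigma + s_o + 1e-8)
            sigma = sigma + s_o
        return sigma, rgb

# Usage: place an object at an unseen pose and query the composed scene at sample points.
graph = NeuralSceneGraph(latent_dim=16)
car = ObjectNode(latent=torch.randn(16),
                 rotation=torch.eye(3),
                 translation=torch.tensor([2.0, 0.0, 0.0]))
pts = torch.rand(1024, 3) * 10.0               # e.g. points sampled along camera rays
sigma, rgb = graph.query(pts, [car])
print(sigma.shape, rgb.shape)                  # torch.Size([1024, 1]) torch.Size([1024, 3])
```

In this sketch, rendering a novel arrangement amounts to editing the set of ObjectNode poses and latent codes before querying; the actual paper additionally performs volume rendering along rays and trains the fields from posed RGB video.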

Updated: 2020-11-23