Neural Light Transport for Relighting and View Synthesis
ACM Transactions on Graphics (IF 7.8). Pub Date: 2021-01-19. DOI: 10.1145/3446328
Xiuming Zhang, Sean Fanello, Yun-Ta Tsai, Tiancheng Sun, Tianfan Xue, Rohit Pandey, Sergio Orts-Escolano, Philip Davidson, Christoph Rhemann, Paul Debevec, Jonathan T. Barron, Ravi Ramamoorthi, William T. Freeman
The light transport (LT) of a scene describes how it appears under different lighting conditions from different viewing directions, and complete knowledge of a scene’s LT enables the synthesis of novel views under arbitrary lighting. In this article, we focus on image-based LT acquisition, primarily for human bodies within a light stage setup. We propose a semi-parametric approach for learning a neural representation of the LT that is embedded in a texture atlas of known but possibly rough geometry. We model all non-diffuse and global LT as residuals added to a physically based diffuse base rendering. In particular, we show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition from a chosen viewpoint. This strategy allows the network to learn complex material effects (such as subsurface scattering) and global illumination (such as diffuse interreflection), while guaranteeing the physical correctness of the diffuse LT (such as hard shadows). With this learned LT, one can relight the scene photorealistically with a directional light or an HDRI map, synthesize novel views with view-dependent effects, or do both simultaneously, all in a unified framework using a set of sparse observations. Qualitative and quantitative experiments demonstrate that our Neural Light Transport (NLT) outperforms state-of-the-art solutions for relighting and view synthesis, without requiring separate treatments for both problems that prior work requires. The code and data are available at http://nlt.csail.mit.edu.
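The core decomposition described above — modeling all non-diffuse and global light transport as a residual added to a physically based diffuse base rendering — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and array shapes are assumptions for illustration, and the residual here stands in for the output of the learned network:

```python
import numpy as np

def compose_relit_image(diffuse_base, learned_residual):
    """Combine a physically based diffuse base rendering with a
    learned residual that accounts for non-diffuse and global
    effects (e.g., subsurface scattering, interreflection).

    Both inputs are H x W x 3 linear-RGB float arrays. The residual
    may be negative (it can darken the base), so we clamp the sum
    to keep radiance physically plausible (non-negative).
    """
    out = diffuse_base + learned_residual
    return np.clip(out, 0.0, None)

# Toy example: a flat gray diffuse base, plus a residual that adds
# a bright specular-like spot at one pixel and darkens another.
base = np.full((4, 4, 3), 0.2)
residual = np.zeros((4, 4, 3))
residual[1, 1] = 0.5    # stand-in for a specular highlight
residual[2, 2] = -0.3   # stand-in for occlusion the base missed
img = compose_relit_image(base, residual)
```

Because the diffuse base is rendered with a physically based model, hard shadows and other diffuse effects remain correct by construction; the network only has to learn what the base cannot express.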

Updated: 2021-01-19