HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields
arXiv - CS - Graphics Pub Date : 2021-06-24 , DOI: arxiv-2106.13228
Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, Steven M. Seitz

Neural Radiance Fields (NeRF) are able to reconstruct scenes with unprecedented fidelity, and various recent works have extended NeRF to handle dynamic scenes. A common approach to reconstruct such non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space. However, these deformation-based approaches struggle to model changes in topology, as topological changes require a discontinuity in the deformation field, but these deformation fields are necessarily continuous. We address this limitation by lifting NeRFs into a higher dimensional space, and by representing the 5D radiance field corresponding to each individual input image as a slice through this "hyper-space". Our method is inspired by level set methods, which model the evolution of surfaces as slices through a higher dimensional surface. We evaluate our method on two tasks: (i) interpolating smoothly between "moments", i.e., configurations of the scene, seen in the input images while maintaining visual plausibility, and (ii) novel-view synthesis at fixed moments. We show that our method, which we dub HyperNeRF, outperforms existing methods on both tasks by significant margins. Compared to Nerfies, HyperNeRF reduces average error rates by 8.6% for interpolation and 8.8% for novel-view synthesis, as measured by LPIPS.
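The core idea in the abstract — a continuous deformation field mapping each frame into a canonical template, plus extra "ambient" coordinates that select a slice of a higher-dimensional radiance field — can be sketched structurally. The following is a minimal illustration with randomly initialized MLPs, not the paper's implementation; the network sizes, the latent-code dimension, and the two-dimensional ambient space are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random weights for a tiny MLP (illustration only, untrained)."""
    return [(rng.normal(size=(m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x

LATENT = 8    # size of the per-frame deformation code (assumed)
AMBIENT = 2   # number of ambient "hyper-space" dimensions (assumed)

deform_net  = mlp([3 + LATENT, 64, 3])        # T: (x, omega_i) -> offset
ambient_net = mlp([3 + LATENT, 64, AMBIENT])  # H: (x, omega_i) -> w
template    = mlp([3 + AMBIENT + 3, 64, 4])   # F: (x', w, d) -> (RGB, density)

def radiance(x, d, omega):
    """Evaluate one frame's radiance field as a slice through hyper-space.

    x: 3D sample point, d: view direction, omega: per-frame latent code.
    """
    xo = np.concatenate([x, omega])
    x_canonical = x + forward(deform_net, xo)   # continuous deformation
    w = forward(ambient_net, xo)                # where to slice hyper-space
    out = forward(template, np.concatenate([x_canonical, w, d]))
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))        # sigmoid -> color in [0, 1]
    sigma = np.log1p(np.exp(out[3]))            # softplus -> density >= 0
    return rgb, sigma

rgb, sigma = radiance(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                      rng.normal(size=LATENT))
```

The key point the sketch captures: the deformation field stays continuous, while topological changes are absorbed by moving the slice coordinates `w`, so two frames with different topology map to different regions of the higher-dimensional template rather than forcing a discontinuity in `T`.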

Updated: 2021-06-25