DFR: Differentiable Function Rendering for Learning 3D Generation from Images
Computer Graphics Forum (IF 2.7), Pub Date: 2020-08-01, DOI: 10.1111/cgf.14082
Yunjie Wu, Zhengxing Sun

Learning-based 3D generation is a popular research field in computer graphics. Recently, several works have adopted implicit functions defined by neural networks to represent 3D objects and have become the current state of the art. However, training such networks requires precise ground-truth 3D data and heavy pre-processing, which is often impractical. To tackle this problem, we propose DFR, a differentiable process for rendering the implicit-function representation of a 3D object into a 2D image. Briefly, our method simulates the physical imaging process by casting multiple rays through the image plane into the function space, aggregating the information along each ray, and performing differentiable shading according to each ray's state. We also propose several strategies to optimize the rendering pipeline, making it efficient enough in both time and memory to support training a network. With DFR, we can perform many 3D modeling tasks with only 2D supervision. We conduct experiments on a variety of applications, and both quantitative and qualitative evaluations demonstrate the effectiveness of our method.
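
The pipeline described above (casting one ray per pixel through the image plane, querying the implicit function along each ray, aggregating, and shading differentiably) can be illustrated with a minimal sketch. This is not the authors' DFR implementation: the tiny occupancy network, the pinhole camera at the origin, the sampling range, the max-based aggregation into a silhouette, and the use of PyTorch are all illustrative assumptions.

import torch
import torch.nn as nn

class ImplicitFunction(nn.Module):
    """Maps a 3D point to an occupancy value in (0, 1)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points):                      # points: (N, 3)
        return torch.sigmoid(self.net(points))      # occupancy in (0, 1)

def render_silhouette(f, image_size=32, n_samples=64):
    """Cast one ray per pixel, sample points along each ray, query the
    implicit function, and aggregate occupancies into a soft silhouette."""
    # Pixel grid on an image plane at z = 1; camera at the origin looking +z.
    u = torch.linspace(-0.5, 0.5, image_size)
    v = torch.linspace(-0.5, 0.5, image_size)
    uu, vv = torch.meshgrid(u, v, indexing="ij")
    dirs = torch.stack([uu, vv, torch.ones_like(uu)], dim=-1)    # (H, W, 3)
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)

    # Sample depths along every ray and build the 3D query points.
    t = torch.linspace(0.5, 2.5, n_samples)                      # (S,)
    points = dirs.unsqueeze(-2) * t.view(1, 1, -1, 1)            # (H, W, S, 3)

    occ = f(points.reshape(-1, 3)).view(image_size, image_size, n_samples)
    # Aggregate along each ray: a pixel is "hit" if any sample is occupied.
    # The max keeps the output differentiable w.r.t. the network weights.
    silhouette, _ = occ.max(dim=-1)                              # (H, W)
    return silhouette

if __name__ == "__main__":
    f = ImplicitFunction()
    target = torch.zeros(32, 32)
    target[8:24, 8:24] = 1.0              # toy 2D supervision (a square mask)
    opt = torch.optim.Adam(f.parameters(), lr=1e-3)
    for step in range(100):
        opt.zero_grad()
        loss = ((render_silhouette(f) - target) ** 2).mean()
        loss.backward()                   # gradients flow through the renderer
        opt.step()

Because every step from pixel value back to network weight is differentiable, a 2D loss on the rendered image alone can update the 3D implicit function, which is the route the abstract describes for training with only 2D supervision.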
