X-Fields
ACM Transactions on Graphics (IF 7.8) Pub Date: 2020-11-27, DOI: 10.1145/3414685.3417827
Mojtaba Bemana 1, Karol Myszkowski 1, Hans-Peter Seidel 1, Tobias Ritschel 2

We suggest representing an X-Field, a set of 2D images taken across different view, time, or illumination conditions (i.e., video, light field, reflectance fields, or combinations thereof), by learning a neural network (NN) that maps their view, time, or light coordinates to 2D images. Executing this NN at new coordinates results in joint view, time, and light interpolation. The key idea that makes this workable is a NN that already knows the "basic tricks" of graphics (lighting, 3D projection, occlusion) in a hard-coded and differentiable form. The NN represents the input to that rendering as an implicit map that, for any view, time, or light coordinate and for any pixel, quantifies how the pixel will move if the view, time, or light coordinates change (the Jacobian of pixel position with respect to view, time, illumination, etc.). Our X-Field representation is trained for one scene within minutes, yielding a compact set of trainable parameters and hence real-time navigation in view, time, and illumination.
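The core mechanism described above, warping pixels by a per-pixel Jacobian of position with respect to the capture coordinates, can be illustrated with a minimal first-order sketch. This is not the authors' implementation: the function name, array shapes, and nearest-neighbor backward warp are assumptions for illustration only; the real method predicts the Jacobian with a trained NN and uses differentiable, soft-occlusion-aware warping.

```python
import numpy as np

def warp_by_jacobian(image, jacobian, delta):
    """Shift each pixel by J @ delta (first-order motion model).

    image:    (H, W) array of intensities
    jacobian: (H, W, 2, D) per-pixel Jacobian of pixel position
              with respect to the D capture coordinates
    delta:    (D,) change in the (view, time, light) coordinates
    """
    H, W = image.shape
    flow = jacobian @ delta                 # (H, W, 2) pixel displacement
    ys, xs = np.mgrid[0:H, 0:W]
    # Backward warp with nearest-neighbor lookup, clamped at the borders
    # (the paper uses differentiable bilinear sampling instead).
    src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, W - 1)
    return image[src_y, src_x]

# Toy check: a Jacobian of 1 in the horizontal direction moves content
# by delta pixels along x when the coordinate changes by delta.
img = np.zeros((4, 6))
img[:, 2] = 1.0                             # bright column at x = 2
J = np.zeros((4, 6, 2, 1))
J[..., 1, 0] = 1.0                          # dx / dcoordinate = 1
out = warp_by_jacobian(img, J, np.array([1.0]))  # column moves to x = 3
```

At interpolation time, the predicted Jacobian is evaluated per captured image and the warped images are blended according to their consistency, which is what allows a single compact network to cover view, time, and light jointly.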

Updated: 2020-11-27