Generative Modelling of BRDF Textures from Flash Images
arXiv - CS - Graphics. Pub Date: 2021-02-23, DOI: arxiv-2102.11861
Philipp Henzler, Valentin Deschaintre, Niloy J. Mitra, Tobias Ritschel

We learn a latent space for easy capture, semantic editing, consistent interpolation, and efficient reproduction of visual material appearance. When users provide a photo of a stationary natural material captured under flash light illumination, it is converted in milliseconds into a latent material code. In a second step, conditioned on the material code, our method, again in milliseconds, produces an infinite and diverse spatial field of BRDF model parameters (diffuse albedo, specular albedo, roughness, normals) that allows rendering in complex scenes and illuminations, matching the appearance of the input picture. Technically, we jointly embed all flash images into a latent space using a convolutional encoder, and -- conditioned on these latent codes -- convert random spatial fields into fields of BRDF parameters using a convolutional neural network (CNN). We condition these BRDF parameters to match the visual characteristics (statistics and spectra of visual features) of the input under matching light. A user study confirms that the semantics of the latent material space agree with user expectations and compares our approach favorably to previous work.
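To make the two-stage pipeline concrete, below is a minimal PyTorch-style sketch of the architecture the abstract describes: a convolutional encoder maps a flash photo to a latent material code, and a conditional CNN converts a random spatial field, together with that code, into per-pixel BRDF parameters. All module names, layer widths, and dimensions (FlashEncoder, BRDFGenerator, LATENT_DIM, the 8 noise channels) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline described above
# (module names and dimensions are assumptions, not the authors' code).
import torch
import torch.nn as nn

LATENT_DIM = 64  # assumed size of the latent material code

class FlashEncoder(nn.Module):
    """Convolutional encoder: flash photo -> latent material code."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, flash_image):             # (B, 3, H, W)
        feat = self.conv(flash_image).flatten(1)
        return self.fc(feat)                    # (B, latent_dim)

class BRDFGenerator(nn.Module):
    """Conditional CNN: random spatial field + material code -> BRDF maps.

    Outputs 9 channels: diffuse albedo (3), specular albedo (3),
    roughness (1), normal xy (2); normals are re-normalised at render time.
    """
    def __init__(self, latent_dim=LATENT_DIM, noise_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_channels + latent_dim, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 9, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, noise_field, z):          # noise: (B, C, H, W), z: (B, D)
        # Broadcast the material code over the spatial field so a texture
        # of any size can be generated from a larger noise field.
        z_map = z[:, :, None, None].expand(-1, -1, *noise_field.shape[2:])
        return self.net(torch.cat([noise_field, z_map], dim=1))

# Usage: encode a flash photo, then synthesise a 512x512 BRDF texture.
encoder, generator = FlashEncoder(), BRDFGenerator()
flash = torch.rand(1, 3, 256, 256)
z = encoder(flash)
noise = torch.randn(1, 8, 512, 512)
brdf_maps = generator(noise, z)                 # (1, 9, 512, 512)
```

Broadcasting the code over the noise field is what allows BRDF textures of arbitrary spatial extent; the training losses matching feature statistics and spectra of the re-rendered output under flash lighting are omitted from this sketch.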

Updated: 2021-02-24