SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
arXiv - CS - Graphics Pub Date : 2021-04-15 , DOI: arxiv-2104.07660 Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black
Learning to model and reconstruct humans in clothing is challenging due to
articulation, non-rigid deformation, and varying clothing types and topologies.
To enable learning, the choice of representation is key. Recent work uses
neural networks to parameterize local surface elements. This approach captures
locally coherent geometry and non-planar details, can deal with varying
topology, and does not require registered training data. However, naively using
such methods to model 3D clothed humans fails to capture fine-grained local
deformations and generalizes poorly. To address this, we present three key
innovations: First, we deform surface elements based on a human body model such
that large-scale deformations caused by articulation are explicitly separated
from topological changes and local clothing deformations. Second, we address
the limitations of existing neural surface elements by regressing local
geometry from local features, significantly improving the expressiveness.
Third, we learn a pose embedding on a 2D parameterization space that encodes
posed body geometry, improving generalization to unseen poses by reducing
non-local spurious correlations. We demonstrate the efficacy of our surface
representation by learning models of complex clothing from point clouds. The
clothing can change topology and deviate from the topology of the body. Once
learned, we can animate previously unseen motions, producing high-quality point
clouds, from which we generate realistic images with neural rendering. We
assess the importance of each technical contribution and show that our approach
outperforms the state-of-the-art methods in terms of reconstruction accuracy
and inference time. The code is available for research purposes at
https://qianlim.github.io/SCALE .
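The core idea of articulated local elements can be sketched in a few lines: each element is a small canonical patch that is rigidly carried along by a local frame on the posed body, with a learned non-rigid offset on top for clothing detail. The sketch below is purely illustrative and is not the authors' implementation; the function name, tensor shapes, and the use of NumPy are assumptions for demonstration.

```python
import numpy as np

def decode_local_elements(anchors, rotations, local_uv, displacements):
    """Place a small grid of points (a "local element") at each body anchor.

    anchors:       (E, 3)    anchor positions on the posed body surface
    rotations:     (E, 3, 3) per-anchor local frames from the body model
    local_uv:      (P, 3)    canonical patch coordinates shared by all elements
    displacements: (E, P, 3) per-element offsets (learned in practice;
                             here just an input array)
    """
    # Add the non-rigid displacement in the canonical frame, then rigidly
    # rotate each patch into its anchor's local frame. This separates
    # large-scale articulated deformation (rotations/anchors) from local
    # clothing deformation (displacements), as the abstract describes.
    patch = np.einsum('eij,epj->epi', rotations, local_uv + displacements)
    return patch + anchors[:, None, :]  # (E, P, 3) world-space point cloud

# Tiny usage example: two anchors, a 4-point canonical patch,
# identity frames and zero displacement reduce to anchor + patch.
anchors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
rotations = np.stack([np.eye(3), np.eye(3)])
local_uv = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                     [-0.1, 0.0, 0.0], [0.0, -0.1, 0.0]])
displacements = np.zeros((2, 4, 3))
points = decode_local_elements(anchors, rotations, local_uv, displacements)
```

In the full model, `displacements` (and optionally the local frames) would be regressed by a neural network from local features, which is the second contribution the abstract highlights.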
Updated: 2021-04-16