A Pixel‐Based Framework for Data‐Driven Clothing
Computer Graphics Forum (IF 2.5), Pub Date: 2020-11-24, DOI: 10.1111/cgf.14108
N. Jin, Y. Zhu, Z. Geng, R. Fedkiw

We propose a novel approach to learning cloth deformation as a function of body pose, recasting the graph‐like triangle mesh data structure into image‐based data in order to leverage popular and well‐developed convolutional neural networks (CNNs) in a two‐dimensional Euclidean domain. Then, a three‐dimensional animation of clothing is equivalent to a sequence of two‐dimensional RGB images driven/choreographed by time‐dependent joint angles. In order to reduce nonlinearity demands on the neural network, we utilize procedural skinning of the body surface to capture much of the rotation/deformation so that the RGB images only contain textures of displacement offsets from skin to clothing. Notably, we illustrate that our approach does not require accurate unclothed body shapes or robust skinning techniques. Additionally, we discuss how standard image‐based techniques such as image partitioning for higher resolution can readily be incorporated into our framework.
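The abstract includes no code; purely as an illustrative sketch of the data recasting it describes (not code from the paper), the snippet below bakes per-vertex skin-to-cloth displacement offsets into the RGB channels of an image and recovers cloth vertex positions by adding the stored offsets back onto the skinned body surface. The names bake_displacement_image, unbake_cloth_verts, and pixel_ij, as well as the random per-vertex pixel assignment and test data, are assumptions made only for this example; in the described framework such an image would be predicted by a CNN from the time-dependent joint angles, and the mesh-to-pixel mapping would come from the paper's own mesh-to-image recasting.

import numpy as np

def bake_displacement_image(cloth_verts, skin_verts, pixel_ij, res=256):
    # Write per-vertex skin-to-cloth offsets into the RGB channels of a
    # res x res image; pixel_ij holds the (row, col) pixel assigned to each
    # cloth vertex by some fixed parameterization (assumed given here).
    img = np.zeros((res, res, 3), dtype=np.float32)
    img[pixel_ij[:, 0], pixel_ij[:, 1]] = cloth_verts - skin_verts
    return img

def unbake_cloth_verts(img, skin_verts, pixel_ij):
    # Invert the baking step: read the offsets out of the image and add
    # them back onto the procedurally skinned body surface.
    return skin_verts + img[pixel_ij[:, 0], pixel_ij[:, 1]]

# Round-trip check on random data, giving each vertex a distinct pixel.
rng = np.random.default_rng(0)
n, res = 1000, 256
flat = rng.choice(res * res, size=n, replace=False)
pixel_ij = np.stack(np.unravel_index(flat, (res, res)), axis=1)
skin = rng.standard_normal((n, 3)).astype(np.float32)
cloth = skin + 0.01 * rng.standard_normal((n, 3)).astype(np.float32)
img = bake_displacement_image(cloth, skin, pixel_ij, res)
assert np.allclose(unbake_cloth_verts(img, skin, pixel_ij), cloth, atol=1e-6)

Under this reading, a higher-resolution displacement image (e.g. via the image partitioning mentioned in the abstract) would simply provide more pixels in which to store offsets, leaving the bake/unbake steps unchanged.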
