Deep Deformation Detail Synthesis for Thin Shell Models
arXiv - CS - Graphics. Pub Date: 2021-02-23, DOI: arxiv-2102.11541
Lan Chen, Lin Gao, Jie Yang, Shibiao Xu, Juntao Ye, Xiaopeng Zhang, Yu-Kun Lai

In physics-based cloth animation, rich folds and detailed wrinkles are achieved at the cost of expensive computational resources and extensive manual tuning. Data-driven techniques significantly reduce this computation by leveraging a database of precomputed examples. One class of methods relies on human poses to synthesize fitted garments, and therefore cannot be applied to general cloth. Another class adds details to coarse meshes without such restrictions. However, existing works usually use coordinate-based representations, which cannot cope with large-scale deformation and require dense vertex correspondences between coarse and fine meshes. Moreover, since such methods only add details, they require the coarse meshes to be close to the fine meshes, which can be either impossible or demand unrealistic constraints when generating the fine meshes. To address these challenges, we develop a temporally and spatially as-consistent-as-possible deformation representation (named TS-ACAP) and a DeformTransformer network to learn the mapping from low-resolution meshes to detailed ones. The TS-ACAP representation is designed to ensure both spatial and temporal consistency for sequential large-scale deformations in cloth animations. With this representation, our DeformTransformer network first uses two mesh-based encoders to extract the coarse and fine features, respectively. To transduce the coarse features to fine ones, we leverage a Transformer network with frame-level attention mechanisms to ensure temporal coherence of the prediction. Experimental results show that our method produces reliable and realistic animations on various datasets at high frame rates, 10 to 35 times faster than physics-based simulation, with superior detail synthesis ability compared to existing methods.
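As a reading aid, the pipeline sketched in the abstract (coarse and fine mesh encoders feeding a Transformer whose frame-level attention enforces temporal coherence) can be pictured as the sequence-to-sequence model below. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the MLP encoders stand in for the paper's mesh-based encoders, and the 9-D per-vertex deformation feature size, latent width, and teacher-forced decoder input are all illustrative guesses, since the abstract does not specify architecture details.

# Hypothetical coarse-to-fine sketch; names and dimensions are assumptions.
import torch
import torch.nn as nn

class MeshEncoder(nn.Module):
    """Encodes per-frame deformation features (e.g. TS-ACAP) into latent tokens.
    An MLP stands in here for the paper's mesh-based encoders."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, x):          # x: (batch, frames, in_dim)
        return self.net(x)         # (batch, frames, latent_dim)

class DeformTransformer(nn.Module):
    """Maps a coarse-mesh feature sequence to fine-mesh deformation features,
    attending across frames for temporal coherence."""
    def __init__(self, coarse_dim, fine_dim, latent_dim=256, n_heads=8, n_layers=4):
        super().__init__()
        self.coarse_enc = MeshEncoder(coarse_dim, latent_dim)
        self.fine_enc = MeshEncoder(fine_dim, latent_dim)
        self.transformer = nn.Transformer(
            d_model=latent_dim, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            batch_first=True,
        )
        self.out = nn.Linear(latent_dim, fine_dim)

    def forward(self, coarse_seq, fine_seq_prev):
        # coarse_seq: (batch, frames, coarse_dim) from the low-res simulation
        # fine_seq_prev: (batch, frames, fine_dim) teacher-forced fine features
        src = self.coarse_enc(coarse_seq)
        tgt = self.fine_enc(fine_seq_prev)
        hid = self.transformer(src, tgt)   # frame-level attention over the clip
        return self.out(hid)               # predicted fine deformation features

# Usage: a 16-frame clip; assume 500 coarse / 5000 fine vertices with a
# 9-D per-vertex deformation feature, flattened into one token per frame.
model = DeformTransformer(coarse_dim=500 * 9, fine_dim=5000 * 9)
coarse = torch.randn(1, 16, 500 * 9)
fine_prev = torch.randn(1, 16, 5000 * 9)
pred = model(coarse, fine_prev)            # (1, 16, 45000)

Here the Transformer decoder attends across all frames of the coarse sequence, which is one plausible way to realize the frame-level attention the abstract credits with temporally coherent predictions.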

Updated: 2021-02-24