Robust Motion In-betweening
arXiv - CS - Graphics Pub Date : 2021-02-09 , DOI: arxiv-2102.04942
Félix G. Harvey, Mike Yurick, Derek Nowrouzezahrai, Christopher Pal

In this work we present a novel, robust transition generation technique that can serve as a new tool for 3D animators, based on adversarial recurrent neural networks. The system synthesizes high-quality motions that use temporally-sparse keyframes as animation constraints. This is reminiscent of the job of in-betweening in traditional animation pipelines, in which an animator draws motion frames between provided keyframes. We first show that a state-of-the-art motion prediction model cannot be easily converted into a robust transition generator when only adding conditioning information about future keyframes. To solve this problem, we then propose two novel additive embedding modifiers that are applied at each timestep to latent representations encoded inside the network's architecture. One modifier is a time-to-arrival embedding that allows variations of the transition length with a single model. The other is a scheduled target noise vector that allows the system to be robust to target distortions and to sample different transitions given fixed keyframes. To qualitatively evaluate our method, we present a custom MotionBuilder plugin that uses our trained model to perform in-betweening in production scenarios. To quantitatively evaluate performance on transitions and generalizations to longer time horizons, we present well-defined in-betweening benchmarks on a subset of the widely used Human3.6M dataset and on LaFAN1, a novel high quality motion capture dataset that is more appropriate for transition generation. We are releasing this new dataset along with this work, with accompanying code for reproducing our baseline results.
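The time-to-arrival embedding described above can be illustrated with a small sketch. The version below assumes a sinusoidal form in the style of transformer positional encodings, indexed by the number of frames remaining until the target keyframe, and simply added to a latent vector at each timestep; the function names and dimensions are illustrative, not the authors' exact implementation.

```python
import numpy as np

def time_to_arrival_embedding(tta, dim, basis=10000.0):
    """Sinusoidal embedding of the remaining frame count (time-to-arrival).

    Assumes an even `dim`. Because the embedding is additive, a single
    recurrent model conditioned this way can handle variable transition
    lengths: the same weights see a different embedding at each step.
    """
    half = dim // 2
    freqs = basis ** (-np.arange(half) / half)   # geometric frequency ladder
    emb = np.zeros(dim)
    emb[0::2] = np.sin(tta * freqs)
    emb[1::2] = np.cos(tta * freqs)
    return emb

# Sketch of conditioning a recurrent step: the embedding is added to the
# encoded latent representation before the recurrent update consumes it.
hidden = np.random.randn(32)           # latent state from some encoder
for t in range(30, 0, -1):             # 30 frames remaining, counting down
    z = hidden + time_to_arrival_embedding(t, hidden.shape[0])
    # ... a recurrent cell would consume z here ...
```

Since the embedding changes smoothly as the count decreases, the network receives a continuous signal of how close the target keyframe is, rather than a hard cutoff.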

Updated: 2021-02-10