Can Action be Imitated? Learn to Reconstruct and Transfer Human Dynamics from Videos
arXiv - CS - Multimedia Pub Date : 2021-07-25 , DOI: arxiv-2107.11756
Yuqian Fu, Yanwei Fu, Yu-Gang Jiang

Given a video demonstration, can we imitate the action it contains? In this paper, we introduce a novel task dubbed mesh-based action imitation, whose goal is to enable an arbitrary target human mesh to perform the same action shown in the video demonstration. To achieve this, we propose a novel Mesh-based Video Action Imitation (M-VAI) method. M-VAI first learns to reconstruct meshes from the given source image frames; the initially recovered mesh sequence is then fed into mesh2mesh, our proposed mesh-sequence smoothing module, to improve temporal consistency. Finally, we imitate the action by transferring the pose from the reconstructed human body to our target identity mesh. M-VAI generates high-quality, detailed human body meshes. Extensive experiments demonstrate the feasibility of our task and the effectiveness of our proposed method.
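The three-stage pipeline described in the abstract (per-frame mesh recovery, temporal smoothing, then pose transfer to a target identity) can be sketched as follows. All function names are hypothetical, the SMPL-style parameterization (72-d pose, 10-d shape) is an assumption, and the moving-average smoother is only a stand-in for the paper's mesh2mesh module — this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def reconstruct_meshes(frames):
    # Stand-in for per-frame human mesh recovery: each frame yields
    # a (pose, shape) parameter pair (SMPL-style: 72-d pose, 10-d shape).
    # A real system would regress these from pixels; here we use random
    # values purely to make the sketch runnable.
    rng = np.random.default_rng(0)
    return [(rng.standard_normal(72), rng.standard_normal(10)) for _ in frames]

def mesh2mesh_smooth(seq, window=3):
    # Stand-in for the mesh2mesh smoothing module: a simple moving
    # average over pose parameters to improve temporal consistency.
    poses = np.stack([pose for pose, _ in seq])
    smoothed = np.empty_like(poses)
    for t in range(len(poses)):
        lo = max(0, t - window // 2)
        hi = min(len(poses), t + window // 2 + 1)
        smoothed[t] = poses[lo:hi].mean(axis=0)
    return [(smoothed[t], seq[t][1]) for t in range(len(seq))]

def transfer_pose(smoothed_seq, target_shape):
    # Pose transfer: keep each frame's source pose but swap in the target
    # identity's shape parameters, so the target mesh performs the action.
    return [(pose, target_shape) for pose, _ in smoothed_seq]

frames = [f"frame_{i}" for i in range(10)]   # placeholder video frames
source_seq = reconstruct_meshes(frames)
smoothed_seq = mesh2mesh_smooth(source_seq)
target_shape = np.zeros(10)                  # placeholder target identity
result = transfer_pose(smoothed_seq, target_shape)
```

The key design point mirrored here is that smoothing operates on the recovered sequence before transfer, so temporal jitter from per-frame reconstruction does not propagate to the target mesh.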

Updated: 2021-07-27